00:00:00.001 Started by upstream project "autotest-per-patch" build number 132104 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.308 The recommended git tool is: git 00:00:00.309 using credential 00000000-0000-0000-0000-000000000002 00:00:00.311 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.324 Fetching changes from the remote Git repository 00:00:00.326 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.338 Using shallow fetch with depth 1 00:00:00.338 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.338 > git --version # timeout=10 00:00:00.350 > git --version # 'git version 2.39.2' 00:00:00.351 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.364 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.364 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.995 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.011 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.024 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.024 > git config core.sparsecheckout # timeout=10 00:00:06.037 > git read-tree -mu HEAD # timeout=10 00:00:06.054 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.073 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.073 > git 
rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.158 [Pipeline] Start of Pipeline 00:00:06.173 [Pipeline] library 00:00:06.177 Loading library shm_lib@master 00:00:06.177 Library shm_lib@master is cached. Copying from home. 00:00:06.191 [Pipeline] node 00:00:06.198 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.199 [Pipeline] { 00:00:06.206 [Pipeline] catchError 00:00:06.207 [Pipeline] { 00:00:06.215 [Pipeline] wrap 00:00:06.220 [Pipeline] { 00:00:06.225 [Pipeline] stage 00:00:06.226 [Pipeline] { (Prologue) 00:00:06.455 [Pipeline] sh 00:00:06.731 + logger -p user.info -t JENKINS-CI 00:00:06.746 [Pipeline] echo 00:00:06.748 Node: WFP16 00:00:06.754 [Pipeline] sh 00:00:07.047 [Pipeline] setCustomBuildProperty 00:00:07.059 [Pipeline] echo 00:00:07.060 Cleanup processes 00:00:07.064 [Pipeline] sh 00:00:07.337 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.337 4073057 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.349 [Pipeline] sh 00:00:07.626 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.626 ++ grep -v 'sudo pgrep' 00:00:07.626 ++ awk '{print $1}' 00:00:07.626 + sudo kill -9 00:00:07.626 + true 00:00:07.637 [Pipeline] cleanWs 00:00:07.644 [WS-CLEANUP] Deleting project workspace... 00:00:07.644 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.650 [WS-CLEANUP] done 00:00:07.654 [Pipeline] setCustomBuildProperty 00:00:07.665 [Pipeline] sh 00:00:07.940 + sudo git config --global --replace-all safe.directory '*' 00:00:08.022 [Pipeline] httpRequest 00:00:08.395 [Pipeline] echo 00:00:08.397 Sorcerer 10.211.164.101 is alive 00:00:08.405 [Pipeline] retry 00:00:08.407 [Pipeline] { 00:00:08.417 [Pipeline] httpRequest 00:00:08.420 HttpMethod: GET 00:00:08.421 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.421 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.448 Response Code: HTTP/1.1 200 OK 00:00:08.449 Success: Status code 200 is in the accepted range: 200,404 00:00:08.449 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:30.785 [Pipeline] } 00:00:30.801 [Pipeline] // retry 00:00:30.810 [Pipeline] sh 00:00:31.092 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:31.108 [Pipeline] httpRequest 00:00:31.703 [Pipeline] echo 00:00:31.705 Sorcerer 10.211.164.101 is alive 00:00:31.713 [Pipeline] retry 00:00:31.716 [Pipeline] { 00:00:31.729 [Pipeline] httpRequest 00:00:31.734 HttpMethod: GET 00:00:31.734 URL: http://10.211.164.101/packages/spdk_81757caea38f0747174a65971a41052ea3bef860.tar.gz 00:00:31.734 Sending request to url: http://10.211.164.101/packages/spdk_81757caea38f0747174a65971a41052ea3bef860.tar.gz 00:00:31.740 Response Code: HTTP/1.1 200 OK 00:00:31.741 Success: Status code 200 is in the accepted range: 200,404 00:00:31.741 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_81757caea38f0747174a65971a41052ea3bef860.tar.gz 00:04:20.759 [Pipeline] } 00:04:20.775 [Pipeline] // retry 00:04:20.782 [Pipeline] sh 00:04:21.064 + tar --no-same-owner -xf spdk_81757caea38f0747174a65971a41052ea3bef860.tar.gz 00:04:25.260 [Pipeline] sh 00:04:25.542 + git -C spdk log 
--oneline -n5 00:04:25.542 81757caea accel/error: fix callback type for tasks in a sequence 00:04:25.542 415f71d95 accel/error: don't submit tasks intended to fail 00:04:25.542 66195447a accel/error: move interval check to a function 00:04:25.542 3931ccfff accel/error: check interval before submission 00:04:25.542 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:04:25.552 [Pipeline] } 00:04:25.565 [Pipeline] // stage 00:04:25.573 [Pipeline] stage 00:04:25.575 [Pipeline] { (Prepare) 00:04:25.592 [Pipeline] writeFile 00:04:25.608 [Pipeline] sh 00:04:25.889 + logger -p user.info -t JENKINS-CI 00:04:25.901 [Pipeline] sh 00:04:26.183 + logger -p user.info -t JENKINS-CI 00:04:26.195 [Pipeline] sh 00:04:26.478 + cat autorun-spdk.conf 00:04:26.478 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.478 SPDK_TEST_NVMF=1 00:04:26.478 SPDK_TEST_NVME_CLI=1 00:04:26.478 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.478 SPDK_TEST_NVMF_NICS=e810 00:04:26.478 SPDK_TEST_VFIOUSER=1 00:04:26.478 SPDK_RUN_UBSAN=1 00:04:26.478 NET_TYPE=phy 00:04:26.484 RUN_NIGHTLY=0 00:04:26.489 [Pipeline] readFile 00:04:26.513 [Pipeline] withEnv 00:04:26.516 [Pipeline] { 00:04:26.529 [Pipeline] sh 00:04:26.814 + set -ex 00:04:26.814 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:26.814 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.814 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.814 ++ SPDK_TEST_NVMF=1 00:04:26.814 ++ SPDK_TEST_NVME_CLI=1 00:04:26.814 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.814 ++ SPDK_TEST_NVMF_NICS=e810 00:04:26.814 ++ SPDK_TEST_VFIOUSER=1 00:04:26.814 ++ SPDK_RUN_UBSAN=1 00:04:26.814 ++ NET_TYPE=phy 00:04:26.814 ++ RUN_NIGHTLY=0 00:04:26.814 + case $SPDK_TEST_NVMF_NICS in 00:04:26.814 + DRIVERS=ice 00:04:26.814 + [[ tcp == \r\d\m\a ]] 00:04:26.814 + [[ -n ice ]] 00:04:26.814 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:26.814 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:26.814 rmmod: ERROR: 
Module mlx5_ib is not currently loaded 00:04:26.814 rmmod: ERROR: Module irdma is not currently loaded 00:04:26.814 rmmod: ERROR: Module i40iw is not currently loaded 00:04:26.814 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:26.814 + true 00:04:26.814 + for D in $DRIVERS 00:04:26.814 + sudo modprobe ice 00:04:26.814 + exit 0 00:04:26.823 [Pipeline] } 00:04:26.839 [Pipeline] // withEnv 00:04:26.845 [Pipeline] } 00:04:26.859 [Pipeline] // stage 00:04:26.867 [Pipeline] catchError 00:04:26.868 [Pipeline] { 00:04:26.880 [Pipeline] timeout 00:04:26.880 Timeout set to expire in 1 hr 0 min 00:04:26.882 [Pipeline] { 00:04:26.896 [Pipeline] stage 00:04:26.898 [Pipeline] { (Tests) 00:04:26.913 [Pipeline] sh 00:04:27.194 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:27.194 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:27.194 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:27.194 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:27.194 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.194 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:27.194 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:27.194 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:27.194 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:27.194 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:27.194 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:27.194 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:27.194 + source /etc/os-release 00:04:27.194 ++ NAME='Fedora Linux' 00:04:27.194 ++ VERSION='39 (Cloud Edition)' 00:04:27.194 ++ ID=fedora 00:04:27.194 ++ VERSION_ID=39 00:04:27.194 ++ VERSION_CODENAME= 00:04:27.194 ++ PLATFORM_ID=platform:f39 00:04:27.194 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:27.194 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:27.194 ++ LOGO=fedora-logo-icon 00:04:27.194 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:27.194 ++ HOME_URL=https://fedoraproject.org/ 00:04:27.194 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:27.194 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:27.194 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:27.194 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:27.194 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:27.194 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:27.194 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:27.194 ++ SUPPORT_END=2024-11-12 00:04:27.194 ++ VARIANT='Cloud Edition' 00:04:27.194 ++ VARIANT_ID=cloud 00:04:27.194 + uname -a 00:04:27.194 Linux spdk-wfp-16 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:27.194 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.725 Hugepages 00:04:29.725 node hugesize free / total 00:04:29.725 node0 1048576kB 0 / 0 00:04:29.725 node0 2048kB 0 / 0 00:04:29.725 node1 1048576kB 0 / 0 00:04:29.725 node1 2048kB 0 / 0 00:04:29.725 00:04:29.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.725 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:04:29.725 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.725 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.725 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:29.725 + rm -f /tmp/spdk-ld-path 00:04:29.725 + source autorun-spdk.conf 00:04:29.725 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.725 ++ SPDK_TEST_NVMF=1 00:04:29.725 ++ SPDK_TEST_NVME_CLI=1 00:04:29.725 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:29.725 ++ SPDK_TEST_NVMF_NICS=e810 00:04:29.725 ++ SPDK_TEST_VFIOUSER=1 00:04:29.725 ++ SPDK_RUN_UBSAN=1 00:04:29.725 ++ NET_TYPE=phy 00:04:29.725 ++ RUN_NIGHTLY=0 00:04:29.725 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:29.725 + [[ -n '' ]] 00:04:29.725 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:29.725 + for M in /var/spdk/build-*-manifest.txt 00:04:29.725 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:29.725 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.725 + for M in /var/spdk/build-*-manifest.txt 00:04:29.725 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:29.725 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.725 + for M in /var/spdk/build-*-manifest.txt 00:04:29.725 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:29.725 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.725 ++ uname 00:04:29.725 + [[ Linux == \L\i\n\u\x ]] 00:04:29.725 + sudo dmesg -T 00:04:29.725 + sudo dmesg --clear 00:04:29.725 + dmesg_pid=4074522 00:04:29.725 + [[ Fedora Linux == FreeBSD ]] 00:04:29.725 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.725 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.725 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:29.725 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:29.725 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:29.725 + [[ -x /usr/src/fio-static/fio ]] 00:04:29.725 + export FIO_BIN=/usr/src/fio-static/fio 00:04:29.725 + FIO_BIN=/usr/src/fio-static/fio 00:04:29.725 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:29.725 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:29.725 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:29.725 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.725 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.725 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:29.725 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.725 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.725 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:29.725 + sudo dmesg -Tw 00:04:29.982 12:11:01 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:29.983 12:11:01 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:29.983 12:11:01 -- 
nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:04:29.983 12:11:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:29.983 12:11:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:29.983 12:11:01 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:29.983 12:11:01 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:29.983 12:11:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:29.983 12:11:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:29.983 12:11:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:29.983 12:11:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.983 12:11:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.983 12:11:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.983 12:11:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.983 12:11:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.983 12:11:01 -- paths/export.sh@5 -- $ export PATH 00:04:29.983 12:11:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.983 12:11:01 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:29.983 12:11:01 -- common/autobuild_common.sh@486 -- $ date +%s 00:04:29.983 12:11:01 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730891461.XXXXXX 00:04:29.983 12:11:01 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730891461.58dFNG 00:04:29.983 12:11:01 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:04:29.983 12:11:01 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:04:29.983 12:11:01 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:29.983 12:11:01 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:29.983 12:11:01 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:29.983 12:11:01 -- common/autobuild_common.sh@502 -- $ get_config_params 00:04:29.983 12:11:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:29.983 12:11:01 -- common/autotest_common.sh@10 -- $ set +x 00:04:29.983 12:11:01 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:29.983 12:11:01 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:04:29.983 12:11:01 -- pm/common@17 -- $ local monitor 00:04:29.983 12:11:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.983 12:11:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.983 12:11:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.983 12:11:01 -- pm/common@21 -- $ date +%s 00:04:29.983 12:11:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.983 12:11:01 -- pm/common@21 -- $ date +%s 00:04:29.983 12:11:01 -- pm/common@25 -- $ sleep 1 00:04:29.983 12:11:01 -- pm/common@21 -- $ date +%s 00:04:29.983 12:11:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730891461 00:04:29.983 12:11:01 -- pm/common@21 -- $ date +%s 00:04:29.983 12:11:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730891461 00:04:29.983 12:11:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730891461 00:04:29.983 12:11:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730891461 00:04:29.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730891461_collect-cpu-load.pm.log 00:04:29.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730891461_collect-vmstat.pm.log 00:04:29.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730891461_collect-cpu-temp.pm.log 00:04:29.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730891461_collect-bmc-pm.bmc.pm.log 00:04:30.918 12:11:02 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:04:30.918 12:11:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:30.918 12:11:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:30.918 12:11:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.918 12:11:02 -- spdk/autobuild.sh@16 -- $ date -u 00:04:30.918 Wed Nov 6 11:11:02 AM UTC 2024 00:04:30.918 12:11:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:30.918 
v25.01-pre-174-g81757caea 00:04:30.918 12:11:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:30.918 12:11:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:30.918 12:11:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:30.918 12:11:02 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:30.918 12:11:02 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:30.918 12:11:02 -- common/autotest_common.sh@10 -- $ set +x 00:04:30.918 ************************************ 00:04:30.918 START TEST ubsan 00:04:30.918 ************************************ 00:04:30.918 12:11:02 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:04:30.918 using ubsan 00:04:30.918 00:04:30.918 real 0m0.000s 00:04:30.918 user 0m0.000s 00:04:30.918 sys 0m0.000s 00:04:30.918 12:11:02 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:30.918 12:11:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:30.918 ************************************ 00:04:30.918 END TEST ubsan 00:04:30.918 ************************************ 00:04:31.176 12:11:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:31.176 12:11:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:31.176 12:11:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:31.176 12:11:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:31.176 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:31.176 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:31.742 Using 'verbs' RDMA provider 00:04:44.513 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:59.391 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:59.391 Creating mk/config.mk...done. 00:04:59.391 Creating mk/cc.flags.mk...done. 00:04:59.391 Type 'make' to build. 00:04:59.391 12:11:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:04:59.391 12:11:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:59.391 12:11:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:59.391 12:11:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:59.391 ************************************ 00:04:59.391 START TEST make 00:04:59.391 ************************************ 00:04:59.391 12:11:29 make -- common/autotest_common.sh@1127 -- $ make -j112 00:04:59.391 make[1]: Nothing to be done for 'all'. 
00:04:59.649 The Meson build system 00:04:59.649 Version: 1.5.0 00:04:59.649 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:59.649 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:59.649 Build type: native build 00:04:59.649 Project name: libvfio-user 00:04:59.649 Project version: 0.0.1 00:04:59.649 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:59.649 C linker for the host machine: cc ld.bfd 2.40-14 00:04:59.649 Host machine cpu family: x86_64 00:04:59.649 Host machine cpu: x86_64 00:04:59.649 Run-time dependency threads found: YES 00:04:59.649 Library dl found: YES 00:04:59.649 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:59.649 Run-time dependency json-c found: YES 0.17 00:04:59.649 Run-time dependency cmocka found: YES 1.1.7 00:04:59.649 Program pytest-3 found: NO 00:04:59.649 Program flake8 found: NO 00:04:59.649 Program misspell-fixer found: NO 00:04:59.649 Program restructuredtext-lint found: NO 00:04:59.649 Program valgrind found: YES (/usr/bin/valgrind) 00:04:59.649 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:59.649 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:59.649 Compiler for C supports arguments -Wwrite-strings: YES 00:04:59.649 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:59.649 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:59.649 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:59.649 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:59.649 Build targets in project: 8 00:04:59.649 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:59.649 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:59.649 00:04:59.649 libvfio-user 0.0.1 00:04:59.649 00:04:59.649 User defined options 00:04:59.649 buildtype : debug 00:04:59.649 default_library: shared 00:04:59.649 libdir : /usr/local/lib 00:04:59.649 00:04:59.649 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:00.215 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:00.474 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:00.474 [2/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:00.474 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:00.474 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:00.474 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:00.474 [6/37] Compiling C object samples/null.p/null.c.o 00:05:00.474 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:00.474 [8/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:00.474 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:00.474 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:00.474 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:00.474 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:00.474 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:00.474 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:00.474 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:00.474 [16/37] Compiling C object samples/client.p/client.c.o 00:05:00.474 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:00.474 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:00.474 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:00.474 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:00.474 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:00.474 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:00.474 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:00.474 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:00.474 [25/37] Compiling C object samples/server.p/server.c.o 00:05:00.474 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:00.474 [27/37] Linking target samples/client 00:05:00.474 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:00.474 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:05:00.732 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:00.732 [31/37] Linking target test/unit_tests 00:05:00.732 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:00.732 [33/37] Linking target samples/null 00:05:00.732 [34/37] Linking target samples/lspci 00:05:00.732 [35/37] Linking target samples/server 00:05:00.732 [36/37] Linking target samples/shadow_ioeventfd_server 00:05:00.732 [37/37] Linking target samples/gpio-pci-idio-16 00:05:00.732 INFO: autodetecting backend as ninja 00:05:00.732 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:00.732 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:01.298 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:01.298 ninja: no work to do. 
00:05:07.858 The Meson build system 00:05:07.858 Version: 1.5.0 00:05:07.858 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:07.858 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:07.858 Build type: native build 00:05:07.858 Program cat found: YES (/usr/bin/cat) 00:05:07.858 Project name: DPDK 00:05:07.858 Project version: 24.03.0 00:05:07.858 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:07.858 C linker for the host machine: cc ld.bfd 2.40-14 00:05:07.858 Host machine cpu family: x86_64 00:05:07.858 Host machine cpu: x86_64 00:05:07.858 Message: ## Building in Developer Mode ## 00:05:07.858 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:07.858 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:07.858 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:07.858 Program python3 found: YES (/usr/bin/python3) 00:05:07.858 Program cat found: YES (/usr/bin/cat) 00:05:07.858 Compiler for C supports arguments -march=native: YES 00:05:07.858 Checking for size of "void *" : 8 00:05:07.858 Checking for size of "void *" : 8 (cached) 00:05:07.858 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:07.858 Library m found: YES 00:05:07.858 Library numa found: YES 00:05:07.858 Has header "numaif.h" : YES 00:05:07.858 Library fdt found: NO 00:05:07.858 Library execinfo found: NO 00:05:07.858 Has header "execinfo.h" : YES 00:05:07.858 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:07.858 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:07.858 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:07.858 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:07.858 Run-time dependency openssl found: YES 3.1.1 00:05:07.858 Run-time 
dependency libpcap found: YES 1.10.4 00:05:07.858 Has header "pcap.h" with dependency libpcap: YES 00:05:07.858 Compiler for C supports arguments -Wcast-qual: YES 00:05:07.858 Compiler for C supports arguments -Wdeprecated: YES 00:05:07.858 Compiler for C supports arguments -Wformat: YES 00:05:07.858 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:07.858 Compiler for C supports arguments -Wformat-security: NO 00:05:07.858 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:07.859 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:07.859 Compiler for C supports arguments -Wnested-externs: YES 00:05:07.859 Compiler for C supports arguments -Wold-style-definition: YES 00:05:07.859 Compiler for C supports arguments -Wpointer-arith: YES 00:05:07.859 Compiler for C supports arguments -Wsign-compare: YES 00:05:07.859 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:07.859 Compiler for C supports arguments -Wundef: YES 00:05:07.859 Compiler for C supports arguments -Wwrite-strings: YES 00:05:07.859 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:07.859 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:07.859 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:07.859 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:07.859 Program objdump found: YES (/usr/bin/objdump) 00:05:07.859 Compiler for C supports arguments -mavx512f: YES 00:05:07.859 Checking if "AVX512 checking" compiles: YES 00:05:07.859 Fetching value of define "__SSE4_2__" : 1 00:05:07.859 Fetching value of define "__AES__" : 1 00:05:07.859 Fetching value of define "__AVX__" : 1 00:05:07.859 Fetching value of define "__AVX2__" : 1 00:05:07.859 Fetching value of define "__AVX512BW__" : 1 00:05:07.859 Fetching value of define "__AVX512CD__" : 1 00:05:07.859 Fetching value of define "__AVX512DQ__" : 1 00:05:07.859 Fetching value of define "__AVX512F__" : 1 
00:05:07.859 Fetching value of define "__AVX512VL__" : 1 00:05:07.859 Fetching value of define "__PCLMUL__" : 1 00:05:07.859 Fetching value of define "__RDRND__" : 1 00:05:07.859 Fetching value of define "__RDSEED__" : 1 00:05:07.859 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:07.859 Fetching value of define "__znver1__" : (undefined) 00:05:07.859 Fetching value of define "__znver2__" : (undefined) 00:05:07.859 Fetching value of define "__znver3__" : (undefined) 00:05:07.859 Fetching value of define "__znver4__" : (undefined) 00:05:07.859 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:07.859 Message: lib/log: Defining dependency "log" 00:05:07.859 Message: lib/kvargs: Defining dependency "kvargs" 00:05:07.859 Message: lib/telemetry: Defining dependency "telemetry" 00:05:07.859 Checking for function "getentropy" : NO 00:05:07.859 Message: lib/eal: Defining dependency "eal" 00:05:07.859 Message: lib/ring: Defining dependency "ring" 00:05:07.859 Message: lib/rcu: Defining dependency "rcu" 00:05:07.859 Message: lib/mempool: Defining dependency "mempool" 00:05:07.859 Message: lib/mbuf: Defining dependency "mbuf" 00:05:07.859 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:07.859 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:07.859 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:07.859 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:07.859 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:07.859 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:07.859 Compiler for C supports arguments -mpclmul: YES 00:05:07.859 Compiler for C supports arguments -maes: YES 00:05:07.859 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:07.859 Compiler for C supports arguments -mavx512bw: YES 00:05:07.859 Compiler for C supports arguments -mavx512dq: YES 00:05:07.859 Compiler for C supports arguments -mavx512vl: YES 00:05:07.859 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:05:07.859 Compiler for C supports arguments -mavx2: YES 00:05:07.859 Compiler for C supports arguments -mavx: YES 00:05:07.859 Message: lib/net: Defining dependency "net" 00:05:07.859 Message: lib/meter: Defining dependency "meter" 00:05:07.859 Message: lib/ethdev: Defining dependency "ethdev" 00:05:07.859 Message: lib/pci: Defining dependency "pci" 00:05:07.859 Message: lib/cmdline: Defining dependency "cmdline" 00:05:07.859 Message: lib/hash: Defining dependency "hash" 00:05:07.859 Message: lib/timer: Defining dependency "timer" 00:05:07.859 Message: lib/compressdev: Defining dependency "compressdev" 00:05:07.859 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:07.859 Message: lib/dmadev: Defining dependency "dmadev" 00:05:07.859 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:07.859 Message: lib/power: Defining dependency "power" 00:05:07.859 Message: lib/reorder: Defining dependency "reorder" 00:05:07.859 Message: lib/security: Defining dependency "security" 00:05:07.859 Has header "linux/userfaultfd.h" : YES 00:05:07.859 Has header "linux/vduse.h" : YES 00:05:07.859 Message: lib/vhost: Defining dependency "vhost" 00:05:07.859 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:07.859 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:07.859 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:07.859 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:07.859 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:07.859 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:07.859 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:07.859 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:07.859 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:07.859 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:05:07.859 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:07.859 Configuring doxy-api-html.conf using configuration 00:05:07.859 Configuring doxy-api-man.conf using configuration 00:05:07.859 Program mandb found: YES (/usr/bin/mandb) 00:05:07.859 Program sphinx-build found: NO 00:05:07.859 Configuring rte_build_config.h using configuration 00:05:07.859 Message: 00:05:07.859 ================= 00:05:07.859 Applications Enabled 00:05:07.859 ================= 00:05:07.859 00:05:07.859 apps: 00:05:07.859 00:05:07.859 00:05:07.859 Message: 00:05:07.859 ================= 00:05:07.859 Libraries Enabled 00:05:07.859 ================= 00:05:07.859 00:05:07.859 libs: 00:05:07.859 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:07.859 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:07.859 cryptodev, dmadev, power, reorder, security, vhost, 00:05:07.859 00:05:07.859 Message: 00:05:07.859 =============== 00:05:07.859 Drivers Enabled 00:05:07.859 =============== 00:05:07.859 00:05:07.859 common: 00:05:07.859 00:05:07.859 bus: 00:05:07.859 pci, vdev, 00:05:07.859 mempool: 00:05:07.859 ring, 00:05:07.859 dma: 00:05:07.859 00:05:07.859 net: 00:05:07.859 00:05:07.859 crypto: 00:05:07.859 00:05:07.859 compress: 00:05:07.859 00:05:07.859 vdpa: 00:05:07.859 00:05:07.859 00:05:07.859 Message: 00:05:07.859 ================= 00:05:07.859 Content Skipped 00:05:07.859 ================= 00:05:07.859 00:05:07.859 apps: 00:05:07.859 dumpcap: explicitly disabled via build config 00:05:07.859 graph: explicitly disabled via build config 00:05:07.859 pdump: explicitly disabled via build config 00:05:07.859 proc-info: explicitly disabled via build config 00:05:07.859 test-acl: explicitly disabled via build config 00:05:07.859 test-bbdev: explicitly disabled via build config 00:05:07.859 test-cmdline: explicitly disabled via build config 00:05:07.859 test-compress-perf: explicitly disabled via build config 00:05:07.859 test-crypto-perf: explicitly disabled 
via build config 00:05:07.859 test-dma-perf: explicitly disabled via build config 00:05:07.859 test-eventdev: explicitly disabled via build config 00:05:07.859 test-fib: explicitly disabled via build config 00:05:07.859 test-flow-perf: explicitly disabled via build config 00:05:07.859 test-gpudev: explicitly disabled via build config 00:05:07.859 test-mldev: explicitly disabled via build config 00:05:07.859 test-pipeline: explicitly disabled via build config 00:05:07.859 test-pmd: explicitly disabled via build config 00:05:07.859 test-regex: explicitly disabled via build config 00:05:07.859 test-sad: explicitly disabled via build config 00:05:07.859 test-security-perf: explicitly disabled via build config 00:05:07.859 00:05:07.859 libs: 00:05:07.859 argparse: explicitly disabled via build config 00:05:07.859 metrics: explicitly disabled via build config 00:05:07.859 acl: explicitly disabled via build config 00:05:07.859 bbdev: explicitly disabled via build config 00:05:07.859 bitratestats: explicitly disabled via build config 00:05:07.859 bpf: explicitly disabled via build config 00:05:07.859 cfgfile: explicitly disabled via build config 00:05:07.859 distributor: explicitly disabled via build config 00:05:07.859 efd: explicitly disabled via build config 00:05:07.859 eventdev: explicitly disabled via build config 00:05:07.859 dispatcher: explicitly disabled via build config 00:05:07.859 gpudev: explicitly disabled via build config 00:05:07.859 gro: explicitly disabled via build config 00:05:07.859 gso: explicitly disabled via build config 00:05:07.859 ip_frag: explicitly disabled via build config 00:05:07.859 jobstats: explicitly disabled via build config 00:05:07.859 latencystats: explicitly disabled via build config 00:05:07.859 lpm: explicitly disabled via build config 00:05:07.859 member: explicitly disabled via build config 00:05:07.859 pcapng: explicitly disabled via build config 00:05:07.859 rawdev: explicitly disabled via build config 00:05:07.859 regexdev: 
explicitly disabled via build config 00:05:07.859 mldev: explicitly disabled via build config 00:05:07.859 rib: explicitly disabled via build config 00:05:07.859 sched: explicitly disabled via build config 00:05:07.859 stack: explicitly disabled via build config 00:05:07.859 ipsec: explicitly disabled via build config 00:05:07.859 pdcp: explicitly disabled via build config 00:05:07.859 fib: explicitly disabled via build config 00:05:07.859 port: explicitly disabled via build config 00:05:07.859 pdump: explicitly disabled via build config 00:05:07.859 table: explicitly disabled via build config 00:05:07.859 pipeline: explicitly disabled via build config 00:05:07.859 graph: explicitly disabled via build config 00:05:07.859 node: explicitly disabled via build config 00:05:07.859 00:05:07.859 drivers: 00:05:07.859 common/cpt: not in enabled drivers build config 00:05:07.859 common/dpaax: not in enabled drivers build config 00:05:07.859 common/iavf: not in enabled drivers build config 00:05:07.859 common/idpf: not in enabled drivers build config 00:05:07.859 common/ionic: not in enabled drivers build config 00:05:07.859 common/mvep: not in enabled drivers build config 00:05:07.859 common/octeontx: not in enabled drivers build config 00:05:07.859 bus/auxiliary: not in enabled drivers build config 00:05:07.859 bus/cdx: not in enabled drivers build config 00:05:07.860 bus/dpaa: not in enabled drivers build config 00:05:07.860 bus/fslmc: not in enabled drivers build config 00:05:07.860 bus/ifpga: not in enabled drivers build config 00:05:07.860 bus/platform: not in enabled drivers build config 00:05:07.860 bus/uacce: not in enabled drivers build config 00:05:07.860 bus/vmbus: not in enabled drivers build config 00:05:07.860 common/cnxk: not in enabled drivers build config 00:05:07.860 common/mlx5: not in enabled drivers build config 00:05:07.860 common/nfp: not in enabled drivers build config 00:05:07.860 common/nitrox: not in enabled drivers build config 00:05:07.860 
common/qat: not in enabled drivers build config 00:05:07.860 common/sfc_efx: not in enabled drivers build config 00:05:07.860 mempool/bucket: not in enabled drivers build config 00:05:07.860 mempool/cnxk: not in enabled drivers build config 00:05:07.860 mempool/dpaa: not in enabled drivers build config 00:05:07.860 mempool/dpaa2: not in enabled drivers build config 00:05:07.860 mempool/octeontx: not in enabled drivers build config 00:05:07.860 mempool/stack: not in enabled drivers build config 00:05:07.860 dma/cnxk: not in enabled drivers build config 00:05:07.860 dma/dpaa: not in enabled drivers build config 00:05:07.860 dma/dpaa2: not in enabled drivers build config 00:05:07.860 dma/hisilicon: not in enabled drivers build config 00:05:07.860 dma/idxd: not in enabled drivers build config 00:05:07.860 dma/ioat: not in enabled drivers build config 00:05:07.860 dma/skeleton: not in enabled drivers build config 00:05:07.860 net/af_packet: not in enabled drivers build config 00:05:07.860 net/af_xdp: not in enabled drivers build config 00:05:07.860 net/ark: not in enabled drivers build config 00:05:07.860 net/atlantic: not in enabled drivers build config 00:05:07.860 net/avp: not in enabled drivers build config 00:05:07.860 net/axgbe: not in enabled drivers build config 00:05:07.860 net/bnx2x: not in enabled drivers build config 00:05:07.860 net/bnxt: not in enabled drivers build config 00:05:07.860 net/bonding: not in enabled drivers build config 00:05:07.860 net/cnxk: not in enabled drivers build config 00:05:07.860 net/cpfl: not in enabled drivers build config 00:05:07.860 net/cxgbe: not in enabled drivers build config 00:05:07.860 net/dpaa: not in enabled drivers build config 00:05:07.860 net/dpaa2: not in enabled drivers build config 00:05:07.860 net/e1000: not in enabled drivers build config 00:05:07.860 net/ena: not in enabled drivers build config 00:05:07.860 net/enetc: not in enabled drivers build config 00:05:07.860 net/enetfec: not in enabled drivers build 
config 00:05:07.860 net/enic: not in enabled drivers build config 00:05:07.860 net/failsafe: not in enabled drivers build config 00:05:07.860 net/fm10k: not in enabled drivers build config 00:05:07.860 net/gve: not in enabled drivers build config 00:05:07.860 net/hinic: not in enabled drivers build config 00:05:07.860 net/hns3: not in enabled drivers build config 00:05:07.860 net/i40e: not in enabled drivers build config 00:05:07.860 net/iavf: not in enabled drivers build config 00:05:07.860 net/ice: not in enabled drivers build config 00:05:07.860 net/idpf: not in enabled drivers build config 00:05:07.860 net/igc: not in enabled drivers build config 00:05:07.860 net/ionic: not in enabled drivers build config 00:05:07.860 net/ipn3ke: not in enabled drivers build config 00:05:07.860 net/ixgbe: not in enabled drivers build config 00:05:07.860 net/mana: not in enabled drivers build config 00:05:07.860 net/memif: not in enabled drivers build config 00:05:07.860 net/mlx4: not in enabled drivers build config 00:05:07.860 net/mlx5: not in enabled drivers build config 00:05:07.860 net/mvneta: not in enabled drivers build config 00:05:07.860 net/mvpp2: not in enabled drivers build config 00:05:07.860 net/netvsc: not in enabled drivers build config 00:05:07.860 net/nfb: not in enabled drivers build config 00:05:07.860 net/nfp: not in enabled drivers build config 00:05:07.860 net/ngbe: not in enabled drivers build config 00:05:07.860 net/null: not in enabled drivers build config 00:05:07.860 net/octeontx: not in enabled drivers build config 00:05:07.860 net/octeon_ep: not in enabled drivers build config 00:05:07.860 net/pcap: not in enabled drivers build config 00:05:07.860 net/pfe: not in enabled drivers build config 00:05:07.860 net/qede: not in enabled drivers build config 00:05:07.860 net/ring: not in enabled drivers build config 00:05:07.860 net/sfc: not in enabled drivers build config 00:05:07.860 net/softnic: not in enabled drivers build config 00:05:07.860 net/tap: 
not in enabled drivers build config 00:05:07.860 net/thunderx: not in enabled drivers build config 00:05:07.860 net/txgbe: not in enabled drivers build config 00:05:07.860 net/vdev_netvsc: not in enabled drivers build config 00:05:07.860 net/vhost: not in enabled drivers build config 00:05:07.860 net/virtio: not in enabled drivers build config 00:05:07.860 net/vmxnet3: not in enabled drivers build config 00:05:07.860 raw/*: missing internal dependency, "rawdev" 00:05:07.860 crypto/armv8: not in enabled drivers build config 00:05:07.860 crypto/bcmfs: not in enabled drivers build config 00:05:07.860 crypto/caam_jr: not in enabled drivers build config 00:05:07.860 crypto/ccp: not in enabled drivers build config 00:05:07.860 crypto/cnxk: not in enabled drivers build config 00:05:07.860 crypto/dpaa_sec: not in enabled drivers build config 00:05:07.860 crypto/dpaa2_sec: not in enabled drivers build config 00:05:07.860 crypto/ipsec_mb: not in enabled drivers build config 00:05:07.860 crypto/mlx5: not in enabled drivers build config 00:05:07.860 crypto/mvsam: not in enabled drivers build config 00:05:07.860 crypto/nitrox: not in enabled drivers build config 00:05:07.860 crypto/null: not in enabled drivers build config 00:05:07.860 crypto/octeontx: not in enabled drivers build config 00:05:07.860 crypto/openssl: not in enabled drivers build config 00:05:07.860 crypto/scheduler: not in enabled drivers build config 00:05:07.860 crypto/uadk: not in enabled drivers build config 00:05:07.860 crypto/virtio: not in enabled drivers build config 00:05:07.860 compress/isal: not in enabled drivers build config 00:05:07.860 compress/mlx5: not in enabled drivers build config 00:05:07.860 compress/nitrox: not in enabled drivers build config 00:05:07.860 compress/octeontx: not in enabled drivers build config 00:05:07.860 compress/zlib: not in enabled drivers build config 00:05:07.860 regex/*: missing internal dependency, "regexdev" 00:05:07.860 ml/*: missing internal dependency, "mldev" 
00:05:07.860 vdpa/ifc: not in enabled drivers build config 00:05:07.860 vdpa/mlx5: not in enabled drivers build config 00:05:07.860 vdpa/nfp: not in enabled drivers build config 00:05:07.860 vdpa/sfc: not in enabled drivers build config 00:05:07.860 event/*: missing internal dependency, "eventdev" 00:05:07.860 baseband/*: missing internal dependency, "bbdev" 00:05:07.860 gpu/*: missing internal dependency, "gpudev" 00:05:07.860 00:05:07.860 00:05:07.860 Build targets in project: 85 00:05:07.860 00:05:07.860 DPDK 24.03.0 00:05:07.860 00:05:07.860 User defined options 00:05:07.860 buildtype : debug 00:05:07.860 default_library : shared 00:05:07.860 libdir : lib 00:05:07.860 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:07.860 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:07.860 c_link_args : 00:05:07.860 cpu_instruction_set: native 00:05:07.860 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:05:07.860 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:05:07.860 enable_docs : false 00:05:07.860 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:07.860 enable_kmods : false 00:05:07.860 max_lcores : 128 00:05:07.860 tests : false 00:05:07.860 00:05:07.860 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:08.118 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:08.385 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:08.385 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:08.385 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:08.385 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:08.385 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:08.385 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:08.385 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:08.385 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:08.385 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:08.385 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:08.385 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:08.385 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:08.385 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:08.646 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:08.646 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:08.646 [16/268] Linking static target lib/librte_kvargs.a 00:05:08.646 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:08.646 [18/268] Linking static target lib/librte_log.a 00:05:08.646 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:08.646 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:08.646 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:08.646 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:08.646 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:08.646 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:08.646 [25/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:08.646 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:08.646 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:08.646 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:08.646 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:08.646 [30/268] Linking static target lib/librte_pci.a 00:05:08.646 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:08.646 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:08.646 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:08.905 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:08.905 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:08.905 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:08.905 [37/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:08.905 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:08.905 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:08.905 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:08.905 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:08.905 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:08.905 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:08.905 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:08.905 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:08.905 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:08.905 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 
00:05:08.905 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:08.905 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:09.164 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:09.164 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:09.164 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:09.164 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:09.164 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:09.164 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:09.164 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:09.164 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:09.164 [58/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:09.164 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:09.164 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:09.164 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:09.164 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:09.164 [63/268] Linking static target lib/librte_meter.a 00:05:09.164 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:09.164 [65/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:09.164 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:09.164 [67/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:09.164 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:09.164 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:09.164 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:09.164 [71/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:09.164 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:09.164 [73/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:09.164 [74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:09.164 [75/268] Linking static target lib/librte_ring.a 00:05:09.164 [76/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:09.164 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:09.164 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:09.164 [79/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.164 [80/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:09.164 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:09.164 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:09.164 [83/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:09.164 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:09.164 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:09.164 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:09.164 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:09.164 [88/268] Linking static target lib/librte_telemetry.a 00:05:09.164 [89/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:09.164 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:09.164 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:09.164 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:09.164 [93/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:09.164 [94/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:09.164 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:09.164 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:09.164 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:09.164 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:09.164 [99/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:09.164 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:09.164 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:09.164 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:09.164 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:09.164 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:09.164 [105/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:09.164 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:09.164 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:09.164 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:09.164 [109/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.164 [110/268] Linking static target lib/librte_cmdline.a 00:05:09.164 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:09.164 [112/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:09.164 [113/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:09.164 [114/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:09.164 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 
00:05:09.164 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:09.164 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:09.164 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:09.164 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:09.164 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:09.164 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:09.164 [122/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:09.164 [123/268] Linking static target lib/librte_dmadev.a 00:05:09.164 [124/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:09.164 [125/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:09.424 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:09.424 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:09.424 [128/268] Linking static target lib/librte_timer.a 00:05:09.424 [129/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:09.424 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:09.424 [131/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:09.424 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:09.424 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:09.424 [134/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:09.424 [135/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:09.424 [136/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:09.424 [137/268] Linking static target lib/librte_rcu.a 00:05:09.424 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:09.424 
[139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:09.424 [140/268] Linking static target lib/librte_eal.a 00:05:09.424 [141/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:09.424 [142/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:09.424 [143/268] Linking static target lib/librte_mempool.a 00:05:09.424 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:09.424 [145/268] Linking static target lib/librte_net.a 00:05:09.424 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:09.424 [147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:09.424 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:09.424 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:09.424 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:09.424 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:09.424 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:09.424 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:09.424 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:09.424 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:09.424 [156/268] Linking static target lib/librte_mbuf.a 00:05:09.424 [157/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.424 [158/268] Linking static target lib/librte_compressdev.a 00:05:09.424 [159/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.424 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:09.424 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:09.424 
[162/268] Linking static target lib/librte_power.a 00:05:09.424 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:09.424 [164/268] Linking target lib/librte_log.so.24.1 00:05:09.424 [165/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.683 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:09.683 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:09.683 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:09.683 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:09.683 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:09.683 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:09.683 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:09.683 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:09.683 [174/268] Linking static target lib/librte_reorder.a 00:05:09.683 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:09.683 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:09.683 [177/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:09.683 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:09.683 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:09.683 [180/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.683 [181/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.683 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:09.683 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:09.683 [184/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:09.683 [185/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.683 [186/268] Linking target lib/librte_kvargs.so.24.1 00:05:09.683 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:09.683 [188/268] Linking target lib/librte_telemetry.so.24.1 00:05:09.683 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:09.684 [190/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.942 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:09.942 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:09.942 [193/268] Linking static target lib/librte_security.a 00:05:09.942 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:09.942 [195/268] Linking static target lib/librte_hash.a 00:05:09.942 [196/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:09.942 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:09.942 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:09.942 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:09.942 [200/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.942 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:09.942 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:09.942 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:09.942 [204/268] Linking static target drivers/librte_mempool_ring.a 00:05:09.942 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:09.942 [206/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:09.942 [207/268] Linking static target drivers/librte_bus_vdev.a 00:05:09.942 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:09.942 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:09.942 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:09.942 [211/268] Linking static target drivers/librte_bus_pci.a 00:05:10.201 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:10.201 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:10.201 [214/268] Linking static target lib/librte_cryptodev.a 00:05:10.201 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.201 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.201 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.201 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.460 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.460 [220/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.460 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.460 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.460 [223/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:10.460 [224/268] Linking static target lib/librte_ethdev.a 00:05:10.718 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.718 [226/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.977 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:12.001 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.261 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:12.261 [230/268] Linking static target lib/librte_vhost.a 00:05:14.165 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.362 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.295 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.552 [234/268] Linking target lib/librte_eal.so.24.1 00:05:19.552 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:19.552 [236/268] Linking target lib/librte_meter.so.24.1 00:05:19.552 [237/268] Linking target lib/librte_ring.so.24.1 00:05:19.810 [238/268] Linking target lib/librte_timer.so.24.1 00:05:19.810 [239/268] Linking target lib/librte_dmadev.so.24.1 00:05:19.810 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:19.810 [241/268] Linking target lib/librte_pci.so.24.1 00:05:19.810 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:19.810 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:19.810 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:19.810 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:19.810 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:19.810 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:19.810 [248/268] Linking target lib/librte_mempool.so.24.1 00:05:19.810 [249/268] Linking target drivers/librte_bus_pci.so.24.1 
00:05:20.069 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:20.069 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:20.069 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:20.069 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:20.327 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:20.327 [255/268] Linking target lib/librte_net.so.24.1 00:05:20.327 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:05:20.327 [257/268] Linking target lib/librte_reorder.so.24.1 00:05:20.327 [258/268] Linking target lib/librte_compressdev.so.24.1 00:05:20.327 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:20.327 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:20.586 [261/268] Linking target lib/librte_security.so.24.1 00:05:20.586 [262/268] Linking target lib/librte_hash.so.24.1 00:05:20.586 [263/268] Linking target lib/librte_cmdline.so.24.1 00:05:20.586 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:20.586 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:20.586 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:20.586 [267/268] Linking target lib/librte_power.so.24.1 00:05:20.844 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:20.844 INFO: autodetecting backend as ninja 00:05:20.844 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:05:35.801 CC lib/ut/ut.o 00:05:35.801 CC lib/ut_mock/mock.o 00:05:35.801 CC lib/log/log.o 00:05:35.801 CC lib/log/log_flags.o 00:05:35.801 CC lib/log/log_deprecated.o 00:05:35.801 LIB libspdk_ut.a 00:05:35.801 SO libspdk_ut.so.2.0 00:05:35.801 LIB libspdk_log.a 00:05:35.801 LIB libspdk_ut_mock.a 
00:05:35.801 SO libspdk_ut_mock.so.6.0 00:05:35.801 SO libspdk_log.so.7.1 00:05:35.801 SYMLINK libspdk_ut.so 00:05:35.801 SYMLINK libspdk_ut_mock.so 00:05:35.801 SYMLINK libspdk_log.so 00:05:35.801 CXX lib/trace_parser/trace.o 00:05:35.801 CC lib/dma/dma.o 00:05:35.801 CC lib/ioat/ioat.o 00:05:35.801 CC lib/util/base64.o 00:05:35.801 CC lib/util/bit_array.o 00:05:35.801 CC lib/util/cpuset.o 00:05:35.801 CC lib/util/crc16.o 00:05:35.801 CC lib/util/crc32.o 00:05:35.801 CC lib/util/crc32c.o 00:05:35.801 CC lib/util/crc32_ieee.o 00:05:35.801 CC lib/util/crc64.o 00:05:35.801 CC lib/util/fd_group.o 00:05:35.801 CC lib/util/dif.o 00:05:35.801 CC lib/util/fd.o 00:05:35.801 CC lib/util/file.o 00:05:35.801 CC lib/util/hexlify.o 00:05:35.801 CC lib/util/iov.o 00:05:35.801 CC lib/util/math.o 00:05:35.801 CC lib/util/net.o 00:05:35.801 CC lib/util/pipe.o 00:05:35.801 CC lib/util/strerror_tls.o 00:05:35.801 CC lib/util/uuid.o 00:05:35.801 CC lib/util/string.o 00:05:35.801 CC lib/util/xor.o 00:05:35.801 CC lib/util/zipf.o 00:05:35.801 CC lib/util/md5.o 00:05:35.801 CC lib/vfio_user/host/vfio_user_pci.o 00:05:35.801 CC lib/vfio_user/host/vfio_user.o 00:05:35.801 LIB libspdk_dma.a 00:05:35.801 LIB libspdk_ioat.a 00:05:35.801 SO libspdk_dma.so.5.0 00:05:35.801 SO libspdk_ioat.so.7.0 00:05:35.801 SYMLINK libspdk_dma.so 00:05:35.801 SYMLINK libspdk_ioat.so 00:05:35.801 LIB libspdk_vfio_user.a 00:05:35.801 SO libspdk_vfio_user.so.5.0 00:05:35.801 SYMLINK libspdk_vfio_user.so 00:05:35.801 LIB libspdk_util.a 00:05:35.801 SO libspdk_util.so.10.1 00:05:35.801 SYMLINK libspdk_util.so 00:05:35.801 LIB libspdk_trace_parser.a 00:05:35.801 SO libspdk_trace_parser.so.6.0 00:05:35.801 SYMLINK libspdk_trace_parser.so 00:05:35.801 CC lib/idxd/idxd.o 00:05:35.801 CC lib/idxd/idxd_user.o 00:05:35.801 CC lib/idxd/idxd_kernel.o 00:05:35.801 CC lib/vmd/vmd.o 00:05:35.801 CC lib/vmd/led.o 00:05:35.801 CC lib/env_dpdk/env.o 00:05:35.801 CC lib/env_dpdk/memory.o 00:05:35.801 CC lib/env_dpdk/pci.o 
00:05:35.801 CC lib/env_dpdk/init.o 00:05:35.801 CC lib/env_dpdk/threads.o 00:05:35.801 CC lib/env_dpdk/pci_virtio.o 00:05:35.801 CC lib/env_dpdk/pci_ioat.o 00:05:35.801 CC lib/env_dpdk/pci_vmd.o 00:05:35.801 CC lib/json/json_util.o 00:05:35.801 CC lib/json/json_parse.o 00:05:35.801 CC lib/json/json_write.o 00:05:35.801 CC lib/env_dpdk/pci_idxd.o 00:05:35.801 CC lib/env_dpdk/pci_event.o 00:05:35.801 CC lib/env_dpdk/sigbus_handler.o 00:05:35.801 CC lib/rdma_utils/rdma_utils.o 00:05:35.801 CC lib/conf/conf.o 00:05:35.801 CC lib/env_dpdk/pci_dpdk.o 00:05:35.801 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:35.801 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:35.801 LIB libspdk_conf.a 00:05:35.801 SO libspdk_conf.so.6.0 00:05:35.801 LIB libspdk_rdma_utils.a 00:05:35.801 LIB libspdk_json.a 00:05:35.801 SO libspdk_rdma_utils.so.1.0 00:05:35.801 SYMLINK libspdk_conf.so 00:05:35.801 SO libspdk_json.so.6.0 00:05:35.801 LIB libspdk_idxd.a 00:05:36.060 SYMLINK libspdk_rdma_utils.so 00:05:36.060 SO libspdk_idxd.so.12.1 00:05:36.060 SYMLINK libspdk_json.so 00:05:36.060 SYMLINK libspdk_idxd.so 00:05:36.060 LIB libspdk_vmd.a 00:05:36.320 SO libspdk_vmd.so.6.0 00:05:36.320 CC lib/rdma_provider/common.o 00:05:36.320 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:36.320 CC lib/jsonrpc/jsonrpc_server.o 00:05:36.320 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:36.320 CC lib/jsonrpc/jsonrpc_client.o 00:05:36.320 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:36.320 SYMLINK libspdk_vmd.so 00:05:36.579 LIB libspdk_env_dpdk.a 00:05:36.579 LIB libspdk_rdma_provider.a 00:05:36.579 SO libspdk_env_dpdk.so.15.1 00:05:36.579 SO libspdk_rdma_provider.so.7.0 00:05:36.579 LIB libspdk_jsonrpc.a 00:05:36.579 SO libspdk_jsonrpc.so.6.0 00:05:36.579 SYMLINK libspdk_rdma_provider.so 00:05:36.579 SYMLINK libspdk_env_dpdk.so 00:05:36.579 SYMLINK libspdk_jsonrpc.so 00:05:37.148 CC lib/rpc/rpc.o 00:05:37.148 LIB libspdk_rpc.a 00:05:37.148 SO libspdk_rpc.so.6.0 00:05:37.407 SYMLINK libspdk_rpc.so 00:05:37.665 CC 
lib/keyring/keyring.o 00:05:37.665 CC lib/keyring/keyring_rpc.o 00:05:37.665 CC lib/trace/trace.o 00:05:37.665 CC lib/trace/trace_flags.o 00:05:37.665 CC lib/trace/trace_rpc.o 00:05:37.665 CC lib/notify/notify.o 00:05:37.665 CC lib/notify/notify_rpc.o 00:05:37.925 LIB libspdk_notify.a 00:05:37.925 SO libspdk_notify.so.6.0 00:05:37.925 LIB libspdk_keyring.a 00:05:37.925 LIB libspdk_trace.a 00:05:37.925 SO libspdk_keyring.so.2.0 00:05:37.925 SYMLINK libspdk_notify.so 00:05:37.925 SO libspdk_trace.so.11.0 00:05:37.925 SYMLINK libspdk_keyring.so 00:05:37.925 SYMLINK libspdk_trace.so 00:05:38.182 CC lib/thread/iobuf.o 00:05:38.182 CC lib/thread/thread.o 00:05:38.182 CC lib/sock/sock.o 00:05:38.182 CC lib/sock/sock_rpc.o 00:05:38.749 LIB libspdk_sock.a 00:05:38.749 SO libspdk_sock.so.10.0 00:05:38.749 SYMLINK libspdk_sock.so 00:05:39.317 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:39.317 CC lib/nvme/nvme_ctrlr.o 00:05:39.317 CC lib/nvme/nvme_fabric.o 00:05:39.317 CC lib/nvme/nvme_ns_cmd.o 00:05:39.317 CC lib/nvme/nvme_ns.o 00:05:39.317 CC lib/nvme/nvme_pcie_common.o 00:05:39.317 CC lib/nvme/nvme_pcie.o 00:05:39.317 CC lib/nvme/nvme_qpair.o 00:05:39.317 CC lib/nvme/nvme.o 00:05:39.317 CC lib/nvme/nvme_quirks.o 00:05:39.317 CC lib/nvme/nvme_transport.o 00:05:39.317 CC lib/nvme/nvme_discovery.o 00:05:39.317 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:39.317 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:39.317 CC lib/nvme/nvme_io_msg.o 00:05:39.317 CC lib/nvme/nvme_tcp.o 00:05:39.317 CC lib/nvme/nvme_zns.o 00:05:39.317 CC lib/nvme/nvme_stubs.o 00:05:39.317 CC lib/nvme/nvme_opal.o 00:05:39.317 CC lib/nvme/nvme_poll_group.o 00:05:39.317 CC lib/nvme/nvme_auth.o 00:05:39.317 CC lib/nvme/nvme_cuse.o 00:05:39.317 CC lib/nvme/nvme_vfio_user.o 00:05:39.317 CC lib/nvme/nvme_rdma.o 00:05:39.884 LIB libspdk_thread.a 00:05:39.884 SO libspdk_thread.so.11.0 00:05:39.884 SYMLINK libspdk_thread.so 00:05:40.142 CC lib/fsdev/fsdev_io.o 00:05:40.142 CC lib/fsdev/fsdev.o 00:05:40.142 CC lib/init/subsystem_rpc.o 
00:05:40.142 CC lib/init/json_config.o 00:05:40.142 CC lib/fsdev/fsdev_rpc.o 00:05:40.142 CC lib/init/subsystem.o 00:05:40.142 CC lib/init/rpc.o 00:05:40.142 CC lib/vfu_tgt/tgt_rpc.o 00:05:40.142 CC lib/vfu_tgt/tgt_endpoint.o 00:05:40.142 CC lib/virtio/virtio.o 00:05:40.142 CC lib/virtio/virtio_vhost_user.o 00:05:40.142 CC lib/virtio/virtio_vfio_user.o 00:05:40.142 CC lib/virtio/virtio_pci.o 00:05:40.142 CC lib/blob/blobstore.o 00:05:40.142 CC lib/blob/request.o 00:05:40.142 CC lib/blob/zeroes.o 00:05:40.142 CC lib/blob/blob_bs_dev.o 00:05:40.142 CC lib/accel/accel.o 00:05:40.142 CC lib/accel/accel_sw.o 00:05:40.142 CC lib/accel/accel_rpc.o 00:05:40.400 LIB libspdk_init.a 00:05:40.400 SO libspdk_init.so.6.0 00:05:40.658 LIB libspdk_virtio.a 00:05:40.658 LIB libspdk_vfu_tgt.a 00:05:40.658 SYMLINK libspdk_init.so 00:05:40.658 SO libspdk_vfu_tgt.so.3.0 00:05:40.658 SO libspdk_virtio.so.7.0 00:05:40.658 SYMLINK libspdk_vfu_tgt.so 00:05:40.658 SYMLINK libspdk_virtio.so 00:05:40.917 CC lib/event/app.o 00:05:40.917 CC lib/event/reactor.o 00:05:40.917 CC lib/event/log_rpc.o 00:05:40.917 CC lib/event/app_rpc.o 00:05:40.917 CC lib/event/scheduler_static.o 00:05:40.917 LIB libspdk_fsdev.a 00:05:40.917 SO libspdk_fsdev.so.2.0 00:05:40.917 SYMLINK libspdk_fsdev.so 00:05:41.176 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:41.435 LIB libspdk_event.a 00:05:41.435 LIB libspdk_accel.a 00:05:41.435 SO libspdk_event.so.14.0 00:05:41.435 SO libspdk_accel.so.16.0 00:05:41.435 SYMLINK libspdk_event.so 00:05:41.435 SYMLINK libspdk_accel.so 00:05:41.435 LIB libspdk_nvme.a 00:05:41.693 SO libspdk_nvme.so.15.0 00:05:41.693 CC lib/bdev/bdev.o 00:05:41.693 CC lib/bdev/bdev_rpc.o 00:05:41.693 CC lib/bdev/bdev_zone.o 00:05:41.693 CC lib/bdev/part.o 00:05:41.693 CC lib/bdev/scsi_nvme.o 00:05:41.951 SYMLINK libspdk_nvme.so 00:05:41.951 LIB libspdk_fuse_dispatcher.a 00:05:41.951 SO libspdk_fuse_dispatcher.so.1.0 00:05:41.951 SYMLINK libspdk_fuse_dispatcher.so 00:05:42.211 LIB libspdk_blob.a 
00:05:42.469 SO libspdk_blob.so.11.0 00:05:42.469 SYMLINK libspdk_blob.so 00:05:42.728 CC lib/lvol/lvol.o 00:05:42.728 CC lib/blobfs/blobfs.o 00:05:42.728 CC lib/blobfs/tree.o 00:05:43.666 LIB libspdk_blobfs.a 00:05:43.666 SO libspdk_blobfs.so.10.0 00:05:43.666 LIB libspdk_lvol.a 00:05:43.666 SYMLINK libspdk_blobfs.so 00:05:43.666 SO libspdk_lvol.so.10.0 00:05:43.924 SYMLINK libspdk_lvol.so 00:05:44.491 LIB libspdk_bdev.a 00:05:44.491 SO libspdk_bdev.so.17.0 00:05:44.749 SYMLINK libspdk_bdev.so 00:05:45.009 CC lib/ublk/ublk.o 00:05:45.009 CC lib/ublk/ublk_rpc.o 00:05:45.009 CC lib/ftl/ftl_core.o 00:05:45.009 CC lib/nvmf/ctrlr.o 00:05:45.009 CC lib/nvmf/ctrlr_discovery.o 00:05:45.010 CC lib/ftl/ftl_init.o 00:05:45.010 CC lib/nvmf/ctrlr_bdev.o 00:05:45.010 CC lib/ftl/ftl_layout.o 00:05:45.010 CC lib/nvmf/subsystem.o 00:05:45.010 CC lib/ftl/ftl_debug.o 00:05:45.010 CC lib/nvmf/nvmf.o 00:05:45.010 CC lib/nvmf/nvmf_rpc.o 00:05:45.010 CC lib/ftl/ftl_io.o 00:05:45.010 CC lib/ftl/ftl_sb.o 00:05:45.010 CC lib/nvmf/transport.o 00:05:45.010 CC lib/ftl/ftl_l2p.o 00:05:45.010 CC lib/nvmf/mdns_server.o 00:05:45.010 CC lib/ftl/ftl_l2p_flat.o 00:05:45.010 CC lib/nvmf/tcp.o 00:05:45.010 CC lib/ftl/ftl_band_ops.o 00:05:45.010 CC lib/nvmf/stubs.o 00:05:45.010 CC lib/ftl/ftl_nv_cache.o 00:05:45.010 CC lib/ftl/ftl_band.o 00:05:45.010 CC lib/ftl/ftl_writer.o 00:05:45.010 CC lib/nvmf/vfio_user.o 00:05:45.010 CC lib/scsi/lun.o 00:05:45.010 CC lib/scsi/dev.o 00:05:45.010 CC lib/ftl/ftl_rq.o 00:05:45.010 CC lib/nvmf/rdma.o 00:05:45.010 CC lib/scsi/scsi.o 00:05:45.010 CC lib/nvmf/auth.o 00:05:45.010 CC lib/ftl/ftl_reloc.o 00:05:45.010 CC lib/scsi/port.o 00:05:45.010 CC lib/ftl/ftl_l2p_cache.o 00:05:45.010 CC lib/nbd/nbd.o 00:05:45.010 CC lib/scsi/scsi_bdev.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:45.010 CC lib/scsi/scsi_pr.o 00:05:45.010 CC lib/ftl/ftl_p2l_log.o 00:05:45.010 CC lib/ftl/ftl_p2l.o 00:05:45.010 CC lib/nbd/nbd_rpc.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt.o 
00:05:45.010 CC lib/scsi/scsi_rpc.o 00:05:45.010 CC lib/scsi/task.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:45.010 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:45.010 CC lib/ftl/utils/ftl_conf.o 00:05:45.010 CC lib/ftl/utils/ftl_property.o 00:05:45.010 CC lib/ftl/utils/ftl_md.o 00:05:45.010 CC lib/ftl/utils/ftl_mempool.o 00:05:45.010 CC lib/ftl/utils/ftl_bitmap.o 00:05:45.010 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:45.010 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:45.010 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:45.010 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:45.010 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:45.010 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:45.010 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:45.010 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:45.010 CC lib/ftl/base/ftl_base_dev.o 00:05:45.010 CC lib/ftl/ftl_trace.o 00:05:45.010 CC lib/ftl/base/ftl_base_bdev.o 00:05:45.576 LIB libspdk_scsi.a 00:05:45.576 SO libspdk_scsi.so.9.0 00:05:45.835 SYMLINK libspdk_scsi.so 00:05:45.835 LIB libspdk_nbd.a 00:05:45.835 SO libspdk_nbd.so.7.0 00:05:45.835 SYMLINK libspdk_nbd.so 00:05:45.835 LIB libspdk_ublk.a 00:05:45.835 SO libspdk_ublk.so.3.0 00:05:46.093 SYMLINK libspdk_ublk.so 00:05:46.093 CC lib/vhost/vhost.o 00:05:46.093 CC lib/vhost/vhost_rpc.o 00:05:46.093 CC lib/vhost/vhost_scsi.o 00:05:46.093 CC 
lib/vhost/vhost_blk.o 00:05:46.093 CC lib/vhost/rte_vhost_user.o 00:05:46.093 CC lib/iscsi/conn.o 00:05:46.093 CC lib/iscsi/portal_grp.o 00:05:46.093 CC lib/iscsi/init_grp.o 00:05:46.093 CC lib/iscsi/iscsi.o 00:05:46.093 CC lib/iscsi/param.o 00:05:46.093 CC lib/iscsi/tgt_node.o 00:05:46.093 CC lib/iscsi/iscsi_subsystem.o 00:05:46.093 CC lib/iscsi/iscsi_rpc.o 00:05:46.093 CC lib/iscsi/task.o 00:05:46.352 LIB libspdk_ftl.a 00:05:46.612 SO libspdk_ftl.so.9.0 00:05:46.871 SYMLINK libspdk_ftl.so 00:05:47.130 LIB libspdk_iscsi.a 00:05:47.130 LIB libspdk_vhost.a 00:05:47.130 SO libspdk_iscsi.so.8.0 00:05:47.130 SO libspdk_vhost.so.8.0 00:05:47.130 SYMLINK libspdk_vhost.so 00:05:47.130 SYMLINK libspdk_iscsi.so 00:05:47.389 LIB libspdk_nvmf.a 00:05:47.648 SO libspdk_nvmf.so.20.0 00:05:47.907 SYMLINK libspdk_nvmf.so 00:05:48.164 CC module/env_dpdk/env_dpdk_rpc.o 00:05:48.164 CC module/vfu_device/vfu_virtio_scsi.o 00:05:48.164 CC module/vfu_device/vfu_virtio.o 00:05:48.164 CC module/vfu_device/vfu_virtio_blk.o 00:05:48.164 CC module/vfu_device/vfu_virtio_rpc.o 00:05:48.164 CC module/vfu_device/vfu_virtio_fs.o 00:05:48.423 CC module/blob/bdev/blob_bdev.o 00:05:48.423 CC module/accel/dsa/accel_dsa.o 00:05:48.423 CC module/accel/dsa/accel_dsa_rpc.o 00:05:48.423 CC module/keyring/file/keyring.o 00:05:48.423 CC module/keyring/file/keyring_rpc.o 00:05:48.423 CC module/keyring/linux/keyring.o 00:05:48.423 CC module/accel/iaa/accel_iaa.o 00:05:48.423 CC module/keyring/linux/keyring_rpc.o 00:05:48.423 CC module/accel/ioat/accel_ioat.o 00:05:48.423 CC module/accel/iaa/accel_iaa_rpc.o 00:05:48.423 CC module/accel/error/accel_error_rpc.o 00:05:48.423 CC module/accel/error/accel_error.o 00:05:48.423 CC module/accel/ioat/accel_ioat_rpc.o 00:05:48.423 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:48.423 LIB libspdk_env_dpdk_rpc.a 00:05:48.423 CC module/fsdev/aio/fsdev_aio.o 00:05:48.423 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:48.423 CC module/sock/posix/posix.o 00:05:48.423 CC 
module/fsdev/aio/linux_aio_mgr.o 00:05:48.423 CC module/scheduler/gscheduler/gscheduler.o 00:05:48.423 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:48.423 SO libspdk_env_dpdk_rpc.so.6.0 00:05:48.423 SYMLINK libspdk_env_dpdk_rpc.so 00:05:48.681 LIB libspdk_accel_iaa.a 00:05:48.681 LIB libspdk_keyring_file.a 00:05:48.681 LIB libspdk_keyring_linux.a 00:05:48.681 LIB libspdk_scheduler_gscheduler.a 00:05:48.681 SO libspdk_keyring_file.so.2.0 00:05:48.681 SO libspdk_accel_iaa.so.3.0 00:05:48.681 SO libspdk_keyring_linux.so.1.0 00:05:48.681 LIB libspdk_scheduler_dpdk_governor.a 00:05:48.681 LIB libspdk_accel_ioat.a 00:05:48.681 SO libspdk_scheduler_gscheduler.so.4.0 00:05:48.681 LIB libspdk_accel_error.a 00:05:48.681 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:48.681 SO libspdk_accel_ioat.so.6.0 00:05:48.681 SYMLINK libspdk_keyring_linux.so 00:05:48.681 LIB libspdk_scheduler_dynamic.a 00:05:48.681 SO libspdk_accel_error.so.2.0 00:05:48.681 SYMLINK libspdk_keyring_file.so 00:05:48.681 SYMLINK libspdk_accel_iaa.so 00:05:48.681 LIB libspdk_blob_bdev.a 00:05:48.681 SYMLINK libspdk_scheduler_gscheduler.so 00:05:48.681 SO libspdk_scheduler_dynamic.so.4.0 00:05:48.681 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:48.681 SO libspdk_blob_bdev.so.11.0 00:05:48.682 SYMLINK libspdk_accel_ioat.so 00:05:48.682 LIB libspdk_accel_dsa.a 00:05:48.682 SYMLINK libspdk_accel_error.so 00:05:48.940 SYMLINK libspdk_scheduler_dynamic.so 00:05:48.940 SO libspdk_accel_dsa.so.5.0 00:05:48.940 SYMLINK libspdk_blob_bdev.so 00:05:48.940 SYMLINK libspdk_accel_dsa.so 00:05:48.940 LIB libspdk_vfu_device.a 00:05:48.940 LIB libspdk_fsdev_aio.a 00:05:48.940 SO libspdk_vfu_device.so.3.0 00:05:48.940 SO libspdk_fsdev_aio.so.1.0 00:05:49.199 SYMLINK libspdk_fsdev_aio.so 00:05:49.199 SYMLINK libspdk_vfu_device.so 00:05:49.199 LIB libspdk_sock_posix.a 00:05:49.200 SO libspdk_sock_posix.so.6.0 00:05:49.200 CC module/bdev/malloc/bdev_malloc.o 00:05:49.200 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:05:49.200 CC module/bdev/iscsi/bdev_iscsi.o 00:05:49.200 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:49.200 CC module/bdev/delay/vbdev_delay.o 00:05:49.200 CC module/bdev/ftl/bdev_ftl.o 00:05:49.200 CC module/bdev/gpt/gpt.o 00:05:49.200 CC module/bdev/error/vbdev_error.o 00:05:49.200 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:49.200 CC module/bdev/gpt/vbdev_gpt.o 00:05:49.200 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:49.200 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:49.200 CC module/bdev/error/vbdev_error_rpc.o 00:05:49.200 CC module/bdev/aio/bdev_aio.o 00:05:49.200 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:49.200 CC module/blobfs/bdev/blobfs_bdev.o 00:05:49.200 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:49.200 CC module/bdev/null/bdev_null.o 00:05:49.200 CC module/bdev/null/bdev_null_rpc.o 00:05:49.200 CC module/bdev/aio/bdev_aio_rpc.o 00:05:49.200 CC module/bdev/lvol/vbdev_lvol.o 00:05:49.200 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:49.200 CC module/bdev/passthru/vbdev_passthru.o 00:05:49.458 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:49.458 CC module/bdev/raid/bdev_raid.o 00:05:49.458 CC module/bdev/nvme/bdev_nvme.o 00:05:49.458 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:49.458 CC module/bdev/raid/bdev_raid_rpc.o 00:05:49.458 CC module/bdev/nvme/nvme_rpc.o 00:05:49.458 CC module/bdev/raid/bdev_raid_sb.o 00:05:49.458 CC module/bdev/nvme/vbdev_opal.o 00:05:49.458 CC module/bdev/nvme/bdev_mdns_client.o 00:05:49.458 CC module/bdev/raid/raid0.o 00:05:49.458 CC module/bdev/raid/raid1.o 00:05:49.458 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:49.458 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:49.458 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:49.458 CC module/bdev/split/vbdev_split.o 00:05:49.458 CC module/bdev/raid/concat.o 00:05:49.458 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:49.458 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:49.458 CC module/bdev/split/vbdev_split_rpc.o 00:05:49.459 SYMLINK libspdk_sock_posix.so 
00:05:49.717 LIB libspdk_blobfs_bdev.a 00:05:49.717 LIB libspdk_bdev_gpt.a 00:05:49.717 SO libspdk_blobfs_bdev.so.6.0 00:05:49.717 SO libspdk_bdev_gpt.so.6.0 00:05:49.717 LIB libspdk_bdev_zone_block.a 00:05:49.717 LIB libspdk_bdev_malloc.a 00:05:49.717 LIB libspdk_bdev_ftl.a 00:05:49.717 LIB libspdk_bdev_split.a 00:05:49.717 LIB libspdk_bdev_null.a 00:05:49.717 SO libspdk_bdev_zone_block.so.6.0 00:05:49.717 SYMLINK libspdk_blobfs_bdev.so 00:05:49.717 SO libspdk_bdev_malloc.so.6.0 00:05:49.717 SO libspdk_bdev_split.so.6.0 00:05:49.717 SO libspdk_bdev_ftl.so.6.0 00:05:49.717 LIB libspdk_bdev_error.a 00:05:49.717 SO libspdk_bdev_null.so.6.0 00:05:49.717 SYMLINK libspdk_bdev_gpt.so 00:05:49.717 SO libspdk_bdev_error.so.6.0 00:05:49.717 SYMLINK libspdk_bdev_zone_block.so 00:05:49.717 SYMLINK libspdk_bdev_malloc.so 00:05:49.717 SYMLINK libspdk_bdev_split.so 00:05:49.717 LIB libspdk_bdev_passthru.a 00:05:49.717 SYMLINK libspdk_bdev_ftl.so 00:05:49.717 LIB libspdk_bdev_aio.a 00:05:49.717 SYMLINK libspdk_bdev_null.so 00:05:49.717 LIB libspdk_bdev_iscsi.a 00:05:49.717 SO libspdk_bdev_aio.so.6.0 00:05:49.717 SO libspdk_bdev_passthru.so.6.0 00:05:49.717 SO libspdk_bdev_iscsi.so.6.0 00:05:49.717 SYMLINK libspdk_bdev_error.so 00:05:49.717 LIB libspdk_bdev_delay.a 00:05:49.976 SO libspdk_bdev_delay.so.6.0 00:05:49.976 SYMLINK libspdk_bdev_aio.so 00:05:49.976 SYMLINK libspdk_bdev_passthru.so 00:05:49.976 SYMLINK libspdk_bdev_iscsi.so 00:05:49.976 SYMLINK libspdk_bdev_delay.so 00:05:49.976 LIB libspdk_bdev_virtio.a 00:05:49.976 LIB libspdk_bdev_lvol.a 00:05:49.976 SO libspdk_bdev_virtio.so.6.0 00:05:49.976 SO libspdk_bdev_lvol.so.6.0 00:05:49.976 SYMLINK libspdk_bdev_lvol.so 00:05:49.976 SYMLINK libspdk_bdev_virtio.so 00:05:50.544 LIB libspdk_bdev_raid.a 00:05:50.544 SO libspdk_bdev_raid.so.6.0 00:05:50.544 SYMLINK libspdk_bdev_raid.so 00:05:51.922 LIB libspdk_bdev_nvme.a 00:05:51.922 SO libspdk_bdev_nvme.so.7.1 00:05:52.182 SYMLINK libspdk_bdev_nvme.so 00:05:52.750 CC 
module/event/subsystems/fsdev/fsdev.o 00:05:52.750 CC module/event/subsystems/scheduler/scheduler.o 00:05:52.750 CC module/event/subsystems/vmd/vmd.o 00:05:52.750 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:52.750 CC module/event/subsystems/sock/sock.o 00:05:52.750 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:52.750 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:52.750 CC module/event/subsystems/iobuf/iobuf.o 00:05:52.750 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:52.750 CC module/event/subsystems/keyring/keyring.o 00:05:53.009 LIB libspdk_event_vhost_blk.a 00:05:53.009 LIB libspdk_event_fsdev.a 00:05:53.009 SO libspdk_event_vhost_blk.so.3.0 00:05:53.009 LIB libspdk_event_keyring.a 00:05:53.009 LIB libspdk_event_iobuf.a 00:05:53.009 LIB libspdk_event_sock.a 00:05:53.009 LIB libspdk_event_scheduler.a 00:05:53.009 SO libspdk_event_fsdev.so.1.0 00:05:53.009 LIB libspdk_event_vmd.a 00:05:53.009 LIB libspdk_event_vfu_tgt.a 00:05:53.009 SO libspdk_event_sock.so.5.0 00:05:53.009 SO libspdk_event_keyring.so.1.0 00:05:53.009 SYMLINK libspdk_event_vhost_blk.so 00:05:53.009 SO libspdk_event_scheduler.so.4.0 00:05:53.009 SO libspdk_event_iobuf.so.3.0 00:05:53.009 SO libspdk_event_vfu_tgt.so.3.0 00:05:53.009 SO libspdk_event_vmd.so.6.0 00:05:53.009 SYMLINK libspdk_event_fsdev.so 00:05:53.009 SYMLINK libspdk_event_keyring.so 00:05:53.009 SYMLINK libspdk_event_sock.so 00:05:53.009 SYMLINK libspdk_event_scheduler.so 00:05:53.009 SYMLINK libspdk_event_iobuf.so 00:05:53.009 SYMLINK libspdk_event_vfu_tgt.so 00:05:53.009 SYMLINK libspdk_event_vmd.so 00:05:53.268 CC module/event/subsystems/accel/accel.o 00:05:53.527 LIB libspdk_event_accel.a 00:05:53.527 SO libspdk_event_accel.so.6.0 00:05:53.527 SYMLINK libspdk_event_accel.so 00:05:53.785 CC module/event/subsystems/bdev/bdev.o 00:05:54.044 LIB libspdk_event_bdev.a 00:05:54.044 SO libspdk_event_bdev.so.6.0 00:05:54.044 SYMLINK libspdk_event_bdev.so 00:05:54.302 CC module/event/subsystems/ublk/ublk.o 
00:05:54.561 CC module/event/subsystems/scsi/scsi.o 00:05:54.561 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:54.561 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:54.561 CC module/event/subsystems/nbd/nbd.o 00:05:54.561 LIB libspdk_event_ublk.a 00:05:54.561 SO libspdk_event_ublk.so.3.0 00:05:54.561 LIB libspdk_event_nbd.a 00:05:54.561 LIB libspdk_event_scsi.a 00:05:54.561 SYMLINK libspdk_event_ublk.so 00:05:54.561 SO libspdk_event_nbd.so.6.0 00:05:54.820 SO libspdk_event_scsi.so.6.0 00:05:54.820 LIB libspdk_event_nvmf.a 00:05:54.820 SYMLINK libspdk_event_nbd.so 00:05:54.820 SYMLINK libspdk_event_scsi.so 00:05:54.820 SO libspdk_event_nvmf.so.6.0 00:05:54.820 SYMLINK libspdk_event_nvmf.so 00:05:55.078 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:55.078 CC module/event/subsystems/iscsi/iscsi.o 00:05:55.337 LIB libspdk_event_iscsi.a 00:05:55.337 LIB libspdk_event_vhost_scsi.a 00:05:55.337 SO libspdk_event_iscsi.so.6.0 00:05:55.337 SO libspdk_event_vhost_scsi.so.3.0 00:05:55.337 SYMLINK libspdk_event_iscsi.so 00:05:55.337 SYMLINK libspdk_event_vhost_scsi.so 00:05:55.595 SO libspdk.so.6.0 00:05:55.595 SYMLINK libspdk.so 00:05:55.855 CC app/spdk_nvme_discover/discovery_aer.o 00:05:55.855 CXX app/trace/trace.o 00:05:55.855 CC app/spdk_nvme_identify/identify.o 00:05:55.855 CC app/spdk_lspci/spdk_lspci.o 00:05:55.855 CC app/spdk_top/spdk_top.o 00:05:55.855 CC app/trace_record/trace_record.o 00:05:55.855 CC app/spdk_nvme_perf/perf.o 00:05:55.855 CC app/iscsi_tgt/iscsi_tgt.o 00:05:55.855 CC test/rpc_client/rpc_client_test.o 00:05:55.855 TEST_HEADER include/spdk/assert.h 00:05:55.855 TEST_HEADER include/spdk/accel_module.h 00:05:55.855 TEST_HEADER include/spdk/accel.h 00:05:55.855 TEST_HEADER include/spdk/barrier.h 00:05:55.855 TEST_HEADER include/spdk/base64.h 00:05:55.855 TEST_HEADER include/spdk/bdev_module.h 00:05:55.855 TEST_HEADER include/spdk/bdev.h 00:05:55.855 TEST_HEADER include/spdk/bdev_zone.h 00:05:55.855 TEST_HEADER include/spdk/blob_bdev.h 
00:05:55.855 TEST_HEADER include/spdk/bit_array.h 00:05:55.855 TEST_HEADER include/spdk/bit_pool.h 00:05:55.855 TEST_HEADER include/spdk/blobfs.h 00:05:55.855 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:55.855 CC app/spdk_dd/spdk_dd.o 00:05:55.855 TEST_HEADER include/spdk/blob.h 00:05:55.855 TEST_HEADER include/spdk/conf.h 00:05:55.855 TEST_HEADER include/spdk/config.h 00:05:55.855 TEST_HEADER include/spdk/cpuset.h 00:05:55.855 TEST_HEADER include/spdk/crc16.h 00:05:55.855 TEST_HEADER include/spdk/crc32.h 00:05:55.855 TEST_HEADER include/spdk/crc64.h 00:05:55.855 TEST_HEADER include/spdk/dif.h 00:05:55.855 TEST_HEADER include/spdk/dma.h 00:05:55.855 TEST_HEADER include/spdk/env_dpdk.h 00:05:55.855 TEST_HEADER include/spdk/endian.h 00:05:55.855 TEST_HEADER include/spdk/env.h 00:05:55.855 TEST_HEADER include/spdk/event.h 00:05:55.855 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:55.855 TEST_HEADER include/spdk/fd_group.h 00:05:55.855 TEST_HEADER include/spdk/fd.h 00:05:55.855 TEST_HEADER include/spdk/fsdev.h 00:05:55.855 TEST_HEADER include/spdk/file.h 00:05:55.855 TEST_HEADER include/spdk/fsdev_module.h 00:05:55.855 TEST_HEADER include/spdk/ftl.h 00:05:55.855 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:55.855 TEST_HEADER include/spdk/gpt_spec.h 00:05:55.855 TEST_HEADER include/spdk/hexlify.h 00:05:55.855 TEST_HEADER include/spdk/histogram_data.h 00:05:55.855 TEST_HEADER include/spdk/idxd_spec.h 00:05:55.855 TEST_HEADER include/spdk/idxd.h 00:05:55.855 TEST_HEADER include/spdk/init.h 00:05:55.855 CC app/nvmf_tgt/nvmf_main.o 00:05:55.855 TEST_HEADER include/spdk/iscsi_spec.h 00:05:55.855 TEST_HEADER include/spdk/ioat.h 00:05:55.855 TEST_HEADER include/spdk/ioat_spec.h 00:05:55.855 TEST_HEADER include/spdk/keyring.h 00:05:55.855 TEST_HEADER include/spdk/json.h 00:05:55.855 TEST_HEADER include/spdk/jsonrpc.h 00:05:55.855 TEST_HEADER include/spdk/log.h 00:05:55.855 TEST_HEADER include/spdk/likely.h 00:05:55.855 TEST_HEADER include/spdk/keyring_module.h 
00:05:55.855 TEST_HEADER include/spdk/lvol.h 00:05:55.855 TEST_HEADER include/spdk/md5.h 00:05:55.855 TEST_HEADER include/spdk/nbd.h 00:05:55.855 TEST_HEADER include/spdk/mmio.h 00:05:55.855 TEST_HEADER include/spdk/net.h 00:05:55.855 TEST_HEADER include/spdk/memory.h 00:05:55.855 TEST_HEADER include/spdk/notify.h 00:05:55.855 TEST_HEADER include/spdk/nvme.h 00:05:55.855 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:55.855 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:55.855 TEST_HEADER include/spdk/nvme_intel.h 00:05:55.855 TEST_HEADER include/spdk/nvme_zns.h 00:05:55.855 TEST_HEADER include/spdk/nvme_spec.h 00:05:55.855 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:55.855 TEST_HEADER include/spdk/nvmf_spec.h 00:05:55.855 TEST_HEADER include/spdk/nvmf.h 00:05:55.855 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:55.855 TEST_HEADER include/spdk/nvmf_transport.h 00:05:55.855 TEST_HEADER include/spdk/pci_ids.h 00:05:55.855 TEST_HEADER include/spdk/opal.h 00:05:55.855 TEST_HEADER include/spdk/opal_spec.h 00:05:55.855 TEST_HEADER include/spdk/pipe.h 00:05:55.855 TEST_HEADER include/spdk/reduce.h 00:05:55.855 TEST_HEADER include/spdk/queue.h 00:05:55.855 TEST_HEADER include/spdk/rpc.h 00:05:55.855 TEST_HEADER include/spdk/scheduler.h 00:05:55.855 TEST_HEADER include/spdk/scsi.h 00:05:55.855 TEST_HEADER include/spdk/sock.h 00:05:55.855 TEST_HEADER include/spdk/scsi_spec.h 00:05:55.855 CC app/spdk_tgt/spdk_tgt.o 00:05:55.855 TEST_HEADER include/spdk/stdinc.h 00:05:55.855 TEST_HEADER include/spdk/string.h 00:05:55.855 TEST_HEADER include/spdk/trace.h 00:05:55.855 TEST_HEADER include/spdk/thread.h 00:05:55.855 TEST_HEADER include/spdk/tree.h 00:05:55.855 TEST_HEADER include/spdk/trace_parser.h 00:05:55.855 TEST_HEADER include/spdk/ublk.h 00:05:55.855 TEST_HEADER include/spdk/uuid.h 00:05:55.855 TEST_HEADER include/spdk/util.h 00:05:55.855 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:55.855 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:55.855 TEST_HEADER 
include/spdk/version.h 00:05:55.855 TEST_HEADER include/spdk/vhost.h 00:05:55.855 TEST_HEADER include/spdk/vmd.h 00:05:55.855 TEST_HEADER include/spdk/zipf.h 00:05:55.855 TEST_HEADER include/spdk/xor.h 00:05:55.855 CXX test/cpp_headers/accel.o 00:05:55.855 CXX test/cpp_headers/assert.o 00:05:55.855 CXX test/cpp_headers/accel_module.o 00:05:55.855 CXX test/cpp_headers/barrier.o 00:05:55.855 CXX test/cpp_headers/base64.o 00:05:55.855 CXX test/cpp_headers/bdev_zone.o 00:05:55.855 CXX test/cpp_headers/bdev_module.o 00:05:55.855 CXX test/cpp_headers/bdev.o 00:05:55.855 CXX test/cpp_headers/blob_bdev.o 00:05:55.855 CXX test/cpp_headers/bit_array.o 00:05:55.855 CXX test/cpp_headers/bit_pool.o 00:05:55.855 CXX test/cpp_headers/blobfs.o 00:05:55.855 CXX test/cpp_headers/blobfs_bdev.o 00:05:55.855 CXX test/cpp_headers/blob.o 00:05:55.855 CXX test/cpp_headers/config.o 00:05:55.855 CXX test/cpp_headers/conf.o 00:05:55.855 CXX test/cpp_headers/cpuset.o 00:05:55.855 CXX test/cpp_headers/crc32.o 00:05:55.855 CXX test/cpp_headers/crc16.o 00:05:55.855 CXX test/cpp_headers/crc64.o 00:05:55.855 CXX test/cpp_headers/dif.o 00:05:55.855 CXX test/cpp_headers/endian.o 00:05:55.855 CXX test/cpp_headers/env.o 00:05:55.855 CXX test/cpp_headers/env_dpdk.o 00:05:55.855 CXX test/cpp_headers/dma.o 00:05:55.855 CXX test/cpp_headers/event.o 00:05:55.855 CXX test/cpp_headers/file.o 00:05:55.855 CXX test/cpp_headers/fd_group.o 00:05:55.855 CXX test/cpp_headers/fd.o 00:05:55.855 CXX test/cpp_headers/fsdev.o 00:05:55.855 CXX test/cpp_headers/fsdev_module.o 00:05:55.855 CXX test/cpp_headers/ftl.o 00:05:55.855 CXX test/cpp_headers/hexlify.o 00:05:55.855 CXX test/cpp_headers/fuse_dispatcher.o 00:05:55.855 CXX test/cpp_headers/idxd_spec.o 00:05:55.855 CXX test/cpp_headers/idxd.o 00:05:55.855 CXX test/cpp_headers/histogram_data.o 00:05:55.855 CXX test/cpp_headers/gpt_spec.o 00:05:56.126 CXX test/cpp_headers/init.o 00:05:56.126 CXX test/cpp_headers/ioat_spec.o 00:05:56.126 CXX test/cpp_headers/ioat.o 
00:05:56.126 CXX test/cpp_headers/iscsi_spec.o 00:05:56.126 CXX test/cpp_headers/json.o 00:05:56.126 CXX test/cpp_headers/jsonrpc.o 00:05:56.126 CXX test/cpp_headers/keyring.o 00:05:56.126 CXX test/cpp_headers/keyring_module.o 00:05:56.126 CXX test/cpp_headers/log.o 00:05:56.126 CXX test/cpp_headers/likely.o 00:05:56.126 CXX test/cpp_headers/lvol.o 00:05:56.126 CXX test/cpp_headers/memory.o 00:05:56.126 CXX test/cpp_headers/md5.o 00:05:56.126 CXX test/cpp_headers/mmio.o 00:05:56.126 CXX test/cpp_headers/nbd.o 00:05:56.126 CXX test/cpp_headers/net.o 00:05:56.126 CXX test/cpp_headers/notify.o 00:05:56.126 CXX test/cpp_headers/nvme.o 00:05:56.126 CXX test/cpp_headers/nvme_intel.o 00:05:56.126 CXX test/cpp_headers/nvme_ocssd.o 00:05:56.126 CXX test/cpp_headers/nvme_spec.o 00:05:56.126 CXX test/cpp_headers/nvme_zns.o 00:05:56.126 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:56.126 CXX test/cpp_headers/nvmf_cmd.o 00:05:56.126 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:56.126 CXX test/cpp_headers/nvmf.o 00:05:56.126 CXX test/cpp_headers/nvmf_spec.o 00:05:56.126 CXX test/cpp_headers/nvmf_transport.o 00:05:56.126 CXX test/cpp_headers/opal.o 00:05:56.126 CXX test/cpp_headers/opal_spec.o 00:05:56.126 CXX test/cpp_headers/pci_ids.o 00:05:56.126 CXX test/cpp_headers/queue.o 00:05:56.126 CXX test/cpp_headers/pipe.o 00:05:56.126 CXX test/cpp_headers/reduce.o 00:05:56.126 CXX test/cpp_headers/scheduler.o 00:05:56.126 CXX test/cpp_headers/rpc.o 00:05:56.126 CC examples/ioat/perf/perf.o 00:05:56.126 CC examples/util/zipf/zipf.o 00:05:56.126 CXX test/cpp_headers/scsi.o 00:05:56.126 CXX test/cpp_headers/sock.o 00:05:56.126 CXX test/cpp_headers/stdinc.o 00:05:56.126 CXX test/cpp_headers/scsi_spec.o 00:05:56.126 CXX test/cpp_headers/string.o 00:05:56.126 CXX test/cpp_headers/thread.o 00:05:56.126 CXX test/cpp_headers/trace.o 00:05:56.126 CXX test/cpp_headers/tree.o 00:05:56.126 CXX test/cpp_headers/trace_parser.o 00:05:56.126 CC test/app/stub/stub.o 00:05:56.126 CC 
test/env/memory/memory_ut.o 00:05:56.126 CC test/thread/poller_perf/poller_perf.o 00:05:56.126 CC test/app/histogram_perf/histogram_perf.o 00:05:56.126 CC test/app/jsoncat/jsoncat.o 00:05:56.126 CC test/env/vtophys/vtophys.o 00:05:56.126 CC app/fio/bdev/fio_plugin.o 00:05:56.126 CC examples/ioat/verify/verify.o 00:05:56.126 CC test/env/pci/pci_ut.o 00:05:56.126 CC app/fio/nvme/fio_plugin.o 00:05:56.126 CC test/dma/test_dma/test_dma.o 00:05:56.126 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:56.126 LINK spdk_lspci 00:05:56.126 CC test/app/bdev_svc/bdev_svc.o 00:05:56.126 CXX test/cpp_headers/ublk.o 00:05:56.403 LINK spdk_nvme_discover 00:05:56.669 LINK rpc_client_test 00:05:56.669 LINK nvmf_tgt 00:05:56.669 LINK iscsi_tgt 00:05:56.928 CXX test/cpp_headers/util.o 00:05:56.928 CXX test/cpp_headers/uuid.o 00:05:56.928 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:56.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:56.928 CC test/env/mem_callbacks/mem_callbacks.o 00:05:56.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:56.928 LINK interrupt_tgt 00:05:56.928 LINK vtophys 00:05:56.928 CXX test/cpp_headers/version.o 00:05:56.928 LINK jsoncat 00:05:56.928 CXX test/cpp_headers/vfio_user_pci.o 00:05:56.928 CXX test/cpp_headers/vfio_user_spec.o 00:05:56.928 CXX test/cpp_headers/vhost.o 00:05:56.928 CXX test/cpp_headers/vmd.o 00:05:56.928 CXX test/cpp_headers/xor.o 00:05:56.928 CXX test/cpp_headers/zipf.o 00:05:56.928 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:56.928 LINK zipf 00:05:56.928 LINK spdk_trace_record 00:05:56.928 LINK histogram_perf 00:05:56.928 LINK poller_perf 00:05:56.928 LINK verify 00:05:56.928 LINK stub 00:05:56.928 LINK env_dpdk_post_init 00:05:56.928 LINK spdk_tgt 00:05:56.928 LINK spdk_trace 00:05:57.186 LINK bdev_svc 00:05:57.186 LINK ioat_perf 00:05:57.186 LINK spdk_dd 00:05:57.186 LINK pci_ut 00:05:57.186 LINK spdk_nvme_perf 00:05:57.443 LINK spdk_bdev 00:05:57.443 LINK test_dma 00:05:57.443 LINK nvme_fuzz 00:05:57.444 LINK 
spdk_nvme 00:05:57.444 LINK vhost_fuzz 00:05:57.444 CC test/event/event_perf/event_perf.o 00:05:57.444 CC examples/sock/hello_world/hello_sock.o 00:05:57.444 CC examples/idxd/perf/perf.o 00:05:57.444 LINK spdk_nvme_identify 00:05:57.444 CC test/event/reactor_perf/reactor_perf.o 00:05:57.444 CC test/event/reactor/reactor.o 00:05:57.444 LINK mem_callbacks 00:05:57.444 CC examples/vmd/led/led.o 00:05:57.444 CC examples/vmd/lsvmd/lsvmd.o 00:05:57.444 CC app/vhost/vhost.o 00:05:57.444 CC examples/thread/thread/thread_ex.o 00:05:57.444 CC test/event/app_repeat/app_repeat.o 00:05:57.444 CC test/event/scheduler/scheduler.o 00:05:57.444 LINK spdk_top 00:05:57.702 LINK reactor 00:05:57.702 LINK led 00:05:57.702 LINK lsvmd 00:05:57.702 LINK reactor_perf 00:05:57.702 LINK hello_sock 00:05:57.702 LINK event_perf 00:05:57.702 LINK thread 00:05:57.702 LINK vhost 00:05:57.702 LINK app_repeat 00:05:57.702 LINK scheduler 00:05:57.702 LINK idxd_perf 00:05:57.960 CC test/nvme/boot_partition/boot_partition.o 00:05:57.960 CC test/nvme/e2edp/nvme_dp.o 00:05:57.960 CC test/nvme/simple_copy/simple_copy.o 00:05:57.960 CC test/nvme/aer/aer.o 00:05:57.960 CC test/nvme/connect_stress/connect_stress.o 00:05:57.960 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:57.960 CC test/nvme/fdp/fdp.o 00:05:57.960 CC test/nvme/sgl/sgl.o 00:05:57.960 CC test/nvme/err_injection/err_injection.o 00:05:57.960 CC test/nvme/reserve/reserve.o 00:05:57.960 CC test/nvme/startup/startup.o 00:05:57.960 CC test/nvme/overhead/overhead.o 00:05:57.960 CC test/nvme/reset/reset.o 00:05:57.960 CC test/nvme/fused_ordering/fused_ordering.o 00:05:57.960 CC test/nvme/compliance/nvme_compliance.o 00:05:57.960 CC test/nvme/cuse/cuse.o 00:05:57.960 CC test/blobfs/mkfs/mkfs.o 00:05:57.960 CC test/accel/dif/dif.o 00:05:57.960 LINK memory_ut 00:05:57.960 CC examples/nvme/arbitration/arbitration.o 00:05:57.960 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:57.960 CC examples/nvme/reconnect/reconnect.o 00:05:57.960 CC 
examples/nvme/hotplug/hotplug.o 00:05:57.960 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:57.960 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:57.960 CC examples/nvme/hello_world/hello_world.o 00:05:57.960 CC examples/nvme/abort/abort.o 00:05:57.960 CC test/lvol/esnap/esnap.o 00:05:58.220 LINK reserve 00:05:58.220 LINK connect_stress 00:05:58.220 LINK boot_partition 00:05:58.220 LINK doorbell_aers 00:05:58.220 LINK nvme_dp 00:05:58.220 LINK err_injection 00:05:58.220 LINK startup 00:05:58.220 CC examples/accel/perf/accel_perf.o 00:05:58.220 CC examples/blob/cli/blobcli.o 00:05:58.220 LINK fused_ordering 00:05:58.220 LINK mkfs 00:05:58.220 LINK simple_copy 00:05:58.220 CC examples/blob/hello_world/hello_blob.o 00:05:58.220 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:58.220 LINK aer 00:05:58.220 LINK sgl 00:05:58.220 LINK reset 00:05:58.220 LINK pmr_persistence 00:05:58.220 LINK overhead 00:05:58.220 LINK cmb_copy 00:05:58.478 LINK nvme_compliance 00:05:58.478 LINK fdp 00:05:58.478 LINK hotplug 00:05:58.478 LINK hello_world 00:05:58.478 LINK arbitration 00:05:58.478 LINK reconnect 00:05:58.478 LINK abort 00:05:58.478 LINK hello_blob 00:05:58.478 LINK hello_fsdev 00:05:58.737 LINK nvme_manage 00:05:58.737 LINK iscsi_fuzz 00:05:58.737 LINK dif 00:05:58.737 LINK accel_perf 00:05:58.737 LINK blobcli 00:05:59.305 CC examples/bdev/hello_world/hello_bdev.o 00:05:59.305 CC examples/bdev/bdevperf/bdevperf.o 00:05:59.305 CC test/bdev/bdevio/bdevio.o 00:05:59.305 LINK cuse 00:05:59.564 LINK hello_bdev 00:05:59.823 LINK bdevio 00:05:59.823 LINK bdevperf 00:06:00.392 CC examples/nvmf/nvmf/nvmf.o 00:06:00.650 LINK nvmf 00:06:01.587 LINK esnap 00:06:01.847 00:06:01.847 real 1m4.151s 00:06:01.847 user 9m44.984s 00:06:01.847 sys 4m23.498s 00:06:01.847 12:12:33 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:01.847 12:12:33 make -- common/autotest_common.sh@10 -- $ set +x 00:06:01.847 ************************************ 00:06:01.847 END TEST make 00:06:01.847 
************************************ 00:06:02.106 12:12:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:02.106 12:12:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:02.106 12:12:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:02.106 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.106 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:02.106 12:12:33 -- pm/common@44 -- $ pid=4074564 00:06:02.106 12:12:33 -- pm/common@50 -- $ kill -TERM 4074564 00:06:02.106 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.106 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:02.106 12:12:33 -- pm/common@44 -- $ pid=4074566 00:06:02.106 12:12:33 -- pm/common@50 -- $ kill -TERM 4074566 00:06:02.106 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.106 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:02.106 12:12:33 -- pm/common@44 -- $ pid=4074567 00:06:02.106 12:12:33 -- pm/common@50 -- $ kill -TERM 4074567 00:06:02.106 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.106 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:02.106 12:12:33 -- pm/common@44 -- $ pid=4074597 00:06:02.106 12:12:33 -- pm/common@50 -- $ sudo -E kill -TERM 4074597 00:06:02.106 12:12:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:02.106 12:12:33 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:02.106 12:12:33 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 
00:06:02.106 12:12:33 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.106 12:12:33 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.106 12:12:33 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.106 12:12:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.106 12:12:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.106 12:12:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.106 12:12:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.106 12:12:33 -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.106 12:12:33 -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.106 12:12:33 -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.106 12:12:33 -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.106 12:12:33 -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.106 12:12:33 -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.106 12:12:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.106 12:12:33 -- scripts/common.sh@344 -- # case "$op" in 00:06:02.106 12:12:33 -- scripts/common.sh@345 -- # : 1 00:06:02.106 12:12:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.106 12:12:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.106 12:12:33 -- scripts/common.sh@365 -- # decimal 1 00:06:02.106 12:12:33 -- scripts/common.sh@353 -- # local d=1 00:06:02.106 12:12:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.106 12:12:33 -- scripts/common.sh@355 -- # echo 1 00:06:02.106 12:12:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.106 12:12:33 -- scripts/common.sh@366 -- # decimal 2 00:06:02.106 12:12:33 -- scripts/common.sh@353 -- # local d=2 00:06:02.106 12:12:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.106 12:12:33 -- scripts/common.sh@355 -- # echo 2 00:06:02.106 12:12:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.106 12:12:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.106 12:12:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.106 12:12:33 -- scripts/common.sh@368 -- # return 0 00:06:02.107 12:12:33 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.107 12:12:33 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.107 --rc genhtml_branch_coverage=1 00:06:02.107 --rc genhtml_function_coverage=1 00:06:02.107 --rc genhtml_legend=1 00:06:02.107 --rc geninfo_all_blocks=1 00:06:02.107 --rc geninfo_unexecuted_blocks=1 00:06:02.107 00:06:02.107 ' 00:06:02.107 12:12:33 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.107 --rc genhtml_branch_coverage=1 00:06:02.107 --rc genhtml_function_coverage=1 00:06:02.107 --rc genhtml_legend=1 00:06:02.107 --rc geninfo_all_blocks=1 00:06:02.107 --rc geninfo_unexecuted_blocks=1 00:06:02.107 00:06:02.107 ' 00:06:02.107 12:12:33 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.107 --rc genhtml_branch_coverage=1 00:06:02.107 --rc 
genhtml_function_coverage=1 00:06:02.107 --rc genhtml_legend=1 00:06:02.107 --rc geninfo_all_blocks=1 00:06:02.107 --rc geninfo_unexecuted_blocks=1 00:06:02.107 00:06:02.107 ' 00:06:02.107 12:12:33 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.107 --rc genhtml_branch_coverage=1 00:06:02.107 --rc genhtml_function_coverage=1 00:06:02.107 --rc genhtml_legend=1 00:06:02.107 --rc geninfo_all_blocks=1 00:06:02.107 --rc geninfo_unexecuted_blocks=1 00:06:02.107 00:06:02.107 ' 00:06:02.107 12:12:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.107 12:12:33 -- nvmf/common.sh@7 -- # uname -s 00:06:02.107 12:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.107 12:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.107 12:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.107 12:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.107 12:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.107 12:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.107 12:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.107 12:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.107 12:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.107 12:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.107 12:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:02.107 12:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:02.107 12:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.107 12:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.107 12:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.107 12:12:33 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.107 12:12:33 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.107 12:12:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.366 12:12:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.366 12:12:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.366 12:12:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.366 12:12:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.366 12:12:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.366 12:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.366 12:12:33 -- paths/export.sh@5 -- # export PATH 00:06:02.366 12:12:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.366 12:12:33 -- nvmf/common.sh@51 -- # : 0 00:06:02.366 12:12:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.366 12:12:33 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:02.366 12:12:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.366 12:12:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.366 12:12:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.366 12:12:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.366 12:12:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.366 12:12:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.366 12:12:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.366 12:12:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:02.366 12:12:33 -- spdk/autotest.sh@32 -- # uname -s 00:06:02.366 12:12:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:02.366 12:12:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:02.366 12:12:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:02.366 12:12:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:02.366 12:12:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:02.366 12:12:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:02.366 12:12:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:02.366 12:12:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:02.366 12:12:33 -- spdk/autotest.sh@48 -- # udevadm_pid=4140581 00:06:02.366 12:12:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:02.366 12:12:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:02.366 12:12:33 -- pm/common@17 -- # local monitor 00:06:02.366 12:12:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.366 12:12:33 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:02.366 12:12:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.366 12:12:33 -- pm/common@21 -- # date +%s 00:06:02.366 12:12:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.366 12:12:33 -- pm/common@21 -- # date +%s 00:06:02.366 12:12:33 -- pm/common@25 -- # sleep 1 00:06:02.366 12:12:33 -- pm/common@21 -- # date +%s 00:06:02.366 12:12:33 -- pm/common@21 -- # date +%s 00:06:02.366 12:12:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730891553 00:06:02.366 12:12:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730891553 00:06:02.366 12:12:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730891553 00:06:02.366 12:12:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730891553 00:06:02.366 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730891553_collect-vmstat.pm.log 00:06:02.366 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730891553_collect-cpu-load.pm.log 00:06:02.366 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730891553_collect-cpu-temp.pm.log 00:06:02.366 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730891553_collect-bmc-pm.bmc.pm.log 00:06:03.304 
12:12:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.304 12:12:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.304 12:12:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.304 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:03.304 12:12:34 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.304 12:12:34 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:03.304 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:03.304 12:12:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:03.304 12:12:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.304 12:12:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.304 12:12:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:03.304 12:12:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.304 12:12:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:03.304 12:12:34 -- common/autotest_common.sh@1455 -- # uname 00:06:03.304 12:12:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:03.304 12:12:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:03.304 12:12:34 -- common/autotest_common.sh@1475 -- # uname 00:06:03.304 12:12:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:03.304 12:12:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:03.304 12:12:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:03.304 lcov: LCOV version 1.15 00:06:03.304 12:12:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:21.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:21.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:43.337 12:13:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:43.337 12:13:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.337 12:13:11 -- common/autotest_common.sh@10 -- # set +x 00:06:43.337 12:13:11 -- spdk/autotest.sh@78 -- # rm -f 00:06:43.337 12:13:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:43.337 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:06:43.337 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:43.337 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:43.337 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:43.337 12:13:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:43.337 12:13:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:43.337 12:13:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:43.337 12:13:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:43.337 12:13:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:43.337 12:13:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:43.337 12:13:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:43.337 12:13:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:43.337 12:13:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:43.337 12:13:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:43.337 12:13:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:43.337 12:13:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:43.337 12:13:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:43.337 12:13:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:43.337 12:13:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:43.337 No valid GPT data, bailing 00:06:43.337 12:13:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:43.337 12:13:14 -- scripts/common.sh@394 -- # pt= 00:06:43.337 12:13:14 -- scripts/common.sh@395 -- # return 1 00:06:43.337 12:13:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:43.337 1+0 records in 00:06:43.337 1+0 records out 00:06:43.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551179 s, 190 MB/s 00:06:43.337 12:13:14 -- spdk/autotest.sh@105 -- # sync 00:06:43.337 12:13:14 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:43.337 12:13:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:43.337 12:13:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:48.610 12:13:19 -- spdk/autotest.sh@111 -- # uname -s 00:06:48.610 12:13:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:48.610 12:13:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:48.610 12:13:19 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:51.144 Hugepages 00:06:51.144 node hugesize free / total 00:06:51.144 node0 1048576kB 0 / 0 00:06:51.144 node0 2048kB 0 / 0 00:06:51.144 node1 1048576kB 0 / 0 00:06:51.144 node1 2048kB 0 / 0 00:06:51.144 00:06:51.144 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:51.144 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:51.144 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:51.144 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:51.144 12:13:22 -- spdk/autotest.sh@117 -- # uname -s 00:06:51.144 12:13:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:51.144 12:13:22 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:51.144 12:13:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:53.679 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:53.679 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:53.938 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:54.875 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:06:54.875 12:13:26 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:55.814 12:13:27 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:55.814 12:13:27 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:55.814 12:13:27 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:55.814 12:13:27 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:55.814 12:13:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:55.814 12:13:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:55.814 12:13:27 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:55.814 12:13:27 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:55.814 12:13:27 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:06:55.814 12:13:27 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:55.814 12:13:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:06:55.814 12:13:27 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:58.348 Waiting for block devices as requested 00:06:58.348 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:06:58.348 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:58.348 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:58.608 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:58.608 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:58.608 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:58.608 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:58.867 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:58.867 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:58.867 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:59.126 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:59.126 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:59.126 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:59.126 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:59.385 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:59.385 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:59.385 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:59.645 12:13:31 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:59.645 12:13:31 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1485 -- # grep 0000:86:00.0/nvme/nvme 00:06:59.645 12:13:31 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:06:59.645 12:13:31 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:59.645 12:13:31 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:59.645 12:13:31 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:59.645 12:13:31 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:06:59.645 12:13:31 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:59.645 12:13:31 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:59.645 12:13:31 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:59.645 12:13:31 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:59.645 12:13:31 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:59.645 12:13:31 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:59.645 12:13:31 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:59.645 12:13:31 -- common/autotest_common.sh@1541 -- # continue 00:06:59.645 12:13:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:59.645 12:13:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.645 12:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:59.645 12:13:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:59.645 12:13:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.645 12:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:59.645 12:13:31 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:02.937 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:07:02.937 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:02.937 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:02.938 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:02.938 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:03.507 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:07:03.507 12:13:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:03.507 12:13:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.507 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.507 12:13:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:03.507 12:13:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:03.507 12:13:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:03.507 12:13:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:03.507 12:13:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:03.507 12:13:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:03.507 12:13:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:03.507 12:13:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:03.507 12:13:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:03.507 12:13:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:03.507 12:13:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:07:03.507 12:13:35 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:03.507 12:13:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:03.766 12:13:35 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:03.766 12:13:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:07:03.766 12:13:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:03.766 12:13:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:07:03.766 12:13:35 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:07:03.766 12:13:35 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:03.766 12:13:35 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:07:03.766 12:13:35 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:07:03.766 12:13:35 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:86:00.0 00:07:03.766 12:13:35 -- common/autotest_common.sh@1577 -- # [[ -z 0000:86:00.0 ]] 00:07:03.766 12:13:35 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=4157972 00:07:03.766 12:13:35 -- common/autotest_common.sh@1583 -- # waitforlisten 4157972 00:07:03.766 12:13:35 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.766 12:13:35 -- common/autotest_common.sh@833 -- # '[' -z 4157972 ']' 00:07:03.766 12:13:35 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.766 12:13:35 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.766 12:13:35 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
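The opal_revert_cleanup step above selects controllers by PCI device id: get_nvme_bdfs_by_id reads each BDF's /sys/bus/pci/devices/&lt;bdf&gt;/device attribute and keeps the ones matching 0x0a54. A minimal sketch of that filter — the sysfs tree is simulated with a temp directory so it runs anywhere; the BDFs and the 0x0a54 id mirror the log, the second device id is purely illustrative:

```shell
# Simulate the sysfs layout: one matching NVMe device, one non-matching.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:86:00.0" "$tmp/0000:87:00.0"
echo 0x0a54 > "$tmp/0000:86:00.0/device"   # id the cleanup step looks for
echo 0x0b60 > "$tmp/0000:87:00.0/device"   # some other device id (made up)

matches=()
for bdf in "$tmp"/*; do
  # On a real system this reads /sys/bus/pci/devices/$bdf/device instead.
  [[ $(cat "$bdf/device") == 0x0a54 ]] && matches+=("$(basename "$bdf")")
done
printf '%s\n' "${matches[@]}"
rm -rf "$tmp"
```

This is why only 0000:86:00.0 is printed by the `printf '%s\n' 0000:86:00.0` calls in the trace: it is the only device whose id file matches.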
00:07:03.766 12:13:35 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.766 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.766 [2024-11-06 12:13:35.205017] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:03.766 [2024-11-06 12:13:35.205078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157972 ] 00:07:03.766 [2024-11-06 12:13:35.297048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.766 [2024-11-06 12:13:35.345553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.025 12:13:35 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.025 12:13:35 -- common/autotest_common.sh@866 -- # return 0 00:07:04.025 12:13:35 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:04.025 12:13:35 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:04.025 12:13:35 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:07:07.314 nvme0n1 00:07:07.314 12:13:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:07.573 [2024-11-06 12:13:38.943878] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:07.573 request: 00:07:07.573 { 00:07:07.573 "nvme_ctrlr_name": "nvme0", 00:07:07.573 "password": "test", 00:07:07.573 "method": "bdev_nvme_opal_revert", 00:07:07.573 "req_id": 1 00:07:07.573 } 00:07:07.573 Got JSON-RPC error response 00:07:07.573 response: 00:07:07.573 { 00:07:07.574 "code": -32602, 00:07:07.574 "message": "Invalid parameters" 00:07:07.574 } 00:07:07.574 12:13:38 -- common/autotest_common.sh@1589 -- # true 
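The failed revert above is an ordinary JSON-RPC 2.0 exchange: rpc.py posts bdev_nvme_opal_revert to the target's Unix socket, and the target answers with error -32602 because the drive has no Opal support. A sketch of the message shapes using standard JSON-RPC 2.0 framing — the bodies are assembled locally for illustration (the log shows SPDK's debug dump, not the exact wire format), and a real client would write the request to /var/tmp/spdk.sock:

```shell
# Request as a client would frame it (illustrative framing).
request='{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_opal_revert","params":{"nvme_ctrlr_name":"nvme0","password":"test"}}'

# Simulated response matching the error captured in the log above.
response='{"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}'

# Pull the error code out without jq, using parameter expansion:
# strip everything up to "code":, then everything from the first comma on.
code=${response#*\"code\":}
code=${code%%,*}
echo "error code: $code"
```

The `-- # true` that follows in the trace is how the script tolerates this expected failure and moves on.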
00:07:07.574 12:13:38 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:07.574 12:13:38 -- common/autotest_common.sh@1593 -- # killprocess 4157972 00:07:07.574 12:13:38 -- common/autotest_common.sh@952 -- # '[' -z 4157972 ']' 00:07:07.574 12:13:38 -- common/autotest_common.sh@956 -- # kill -0 4157972 00:07:07.574 12:13:38 -- common/autotest_common.sh@957 -- # uname 00:07:07.574 12:13:38 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:07.574 12:13:38 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4157972 00:07:07.574 12:13:39 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:07.574 12:13:39 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:07.574 12:13:39 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4157972' 00:07:07.574 killing process with pid 4157972 00:07:07.574 12:13:39 -- common/autotest_common.sh@971 -- # kill 4157972 00:07:07.574 12:13:39 -- common/autotest_common.sh@976 -- # wait 4157972 00:07:09.479 12:13:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:09.479 12:13:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:09.479 12:13:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:09.479 12:13:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:09.479 12:13:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:09.479 12:13:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.479 12:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.479 12:13:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:09.479 12:13:40 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:09.479 12:13:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.479 12:13:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.479 12:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.479 ************************************ 00:07:09.479 START TEST env 00:07:09.479 
************************************ 00:07:09.479 12:13:40 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:09.479 * Looking for test storage... 00:07:09.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:09.479 12:13:40 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.479 12:13:40 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.479 12:13:40 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.479 12:13:40 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.479 12:13:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.479 12:13:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.479 12:13:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.479 12:13:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.479 12:13:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.480 12:13:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.480 12:13:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.480 12:13:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.480 12:13:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.480 12:13:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.480 12:13:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.480 12:13:40 env -- scripts/common.sh@344 -- # case "$op" in 00:07:09.480 12:13:40 env -- scripts/common.sh@345 -- # : 1 00:07:09.480 12:13:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.480 12:13:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.480 12:13:40 env -- scripts/common.sh@365 -- # decimal 1 00:07:09.480 12:13:40 env -- scripts/common.sh@353 -- # local d=1 00:07:09.480 12:13:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.480 12:13:40 env -- scripts/common.sh@355 -- # echo 1 00:07:09.480 12:13:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.480 12:13:40 env -- scripts/common.sh@366 -- # decimal 2 00:07:09.480 12:13:40 env -- scripts/common.sh@353 -- # local d=2 00:07:09.480 12:13:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.480 12:13:40 env -- scripts/common.sh@355 -- # echo 2 00:07:09.480 12:13:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.480 12:13:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.480 12:13:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.480 12:13:40 env -- scripts/common.sh@368 -- # return 0 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.480 --rc genhtml_branch_coverage=1 00:07:09.480 --rc genhtml_function_coverage=1 00:07:09.480 --rc genhtml_legend=1 00:07:09.480 --rc geninfo_all_blocks=1 00:07:09.480 --rc geninfo_unexecuted_blocks=1 00:07:09.480 00:07:09.480 ' 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.480 --rc genhtml_branch_coverage=1 00:07:09.480 --rc genhtml_function_coverage=1 00:07:09.480 --rc genhtml_legend=1 00:07:09.480 --rc geninfo_all_blocks=1 00:07:09.480 --rc geninfo_unexecuted_blocks=1 00:07:09.480 00:07:09.480 ' 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:09.480 --rc genhtml_branch_coverage=1 00:07:09.480 --rc genhtml_function_coverage=1 00:07:09.480 --rc genhtml_legend=1 00:07:09.480 --rc geninfo_all_blocks=1 00:07:09.480 --rc geninfo_unexecuted_blocks=1 00:07:09.480 00:07:09.480 ' 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.480 --rc genhtml_branch_coverage=1 00:07:09.480 --rc genhtml_function_coverage=1 00:07:09.480 --rc genhtml_legend=1 00:07:09.480 --rc geninfo_all_blocks=1 00:07:09.480 --rc geninfo_unexecuted_blocks=1 00:07:09.480 00:07:09.480 ' 00:07:09.480 12:13:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.480 12:13:40 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.480 12:13:40 env -- common/autotest_common.sh@10 -- # set +x 00:07:09.480 ************************************ 00:07:09.480 START TEST env_memory 00:07:09.480 ************************************ 00:07:09.480 12:13:40 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:09.480 00:07:09.480 00:07:09.480 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.480 http://cunit.sourceforge.net/ 00:07:09.480 00:07:09.480 00:07:09.480 Suite: memory 00:07:09.480 Test: alloc and free memory map ...[2024-11-06 12:13:41.035720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:09.480 passed 00:07:09.480 Test: mem map translation ...[2024-11-06 12:13:41.064919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:09.480 [2024-11-06 
12:13:41.064941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:09.480 [2024-11-06 12:13:41.064993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:09.480 [2024-11-06 12:13:41.065003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:09.740 passed 00:07:09.740 Test: mem map registration ...[2024-11-06 12:13:41.124808] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:09.740 [2024-11-06 12:13:41.124829] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:09.740 passed 00:07:09.740 Test: mem map adjacent registrations ...passed 00:07:09.740 00:07:09.740 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.740 suites 1 1 n/a 0 0 00:07:09.740 tests 4 4 4 0 0 00:07:09.740 asserts 152 152 152 0 n/a 00:07:09.740 00:07:09.740 Elapsed time = 0.204 seconds 00:07:09.740 00:07:09.740 real 0m0.218s 00:07:09.740 user 0m0.206s 00:07:09.740 sys 0m0.011s 00:07:09.740 12:13:41 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.740 12:13:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:09.740 ************************************ 00:07:09.740 END TEST env_memory 00:07:09.740 ************************************ 00:07:09.740 12:13:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:09.740 12:13:41 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:07:09.740 12:13:41 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.740 12:13:41 env -- common/autotest_common.sh@10 -- # set +x 00:07:09.740 ************************************ 00:07:09.740 START TEST env_vtophys 00:07:09.740 ************************************ 00:07:09.740 12:13:41 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:09.740 EAL: lib.eal log level changed from notice to debug 00:07:09.740 EAL: Detected lcore 0 as core 0 on socket 0 00:07:09.740 EAL: Detected lcore 1 as core 1 on socket 0 00:07:09.740 EAL: Detected lcore 2 as core 2 on socket 0 00:07:09.740 EAL: Detected lcore 3 as core 3 on socket 0 00:07:09.740 EAL: Detected lcore 4 as core 4 on socket 0 00:07:09.740 EAL: Detected lcore 5 as core 5 on socket 0 00:07:09.740 EAL: Detected lcore 6 as core 6 on socket 0 00:07:09.740 EAL: Detected lcore 7 as core 8 on socket 0 00:07:09.740 EAL: Detected lcore 8 as core 9 on socket 0 00:07:09.740 EAL: Detected lcore 9 as core 10 on socket 0 00:07:09.740 EAL: Detected lcore 10 as core 11 on socket 0 00:07:09.740 EAL: Detected lcore 11 as core 12 on socket 0 00:07:09.740 EAL: Detected lcore 12 as core 13 on socket 0 00:07:09.740 EAL: Detected lcore 13 as core 14 on socket 0 00:07:09.740 EAL: Detected lcore 14 as core 16 on socket 0 00:07:09.740 EAL: Detected lcore 15 as core 17 on socket 0 00:07:09.740 EAL: Detected lcore 16 as core 18 on socket 0 00:07:09.740 EAL: Detected lcore 17 as core 19 on socket 0 00:07:09.740 EAL: Detected lcore 18 as core 20 on socket 0 00:07:09.740 EAL: Detected lcore 19 as core 21 on socket 0 00:07:09.740 EAL: Detected lcore 20 as core 22 on socket 0 00:07:09.740 EAL: Detected lcore 21 as core 24 on socket 0 00:07:09.740 EAL: Detected lcore 22 as core 25 on socket 0 00:07:09.740 EAL: Detected lcore 23 as core 26 on socket 0 00:07:09.740 EAL: Detected lcore 24 as core 27 on socket 0 00:07:09.740 EAL: Detected lcore 25 
as core 28 on socket 0 00:07:09.740 EAL: Detected lcore 26 as core 29 on socket 0 00:07:09.740 EAL: Detected lcore 27 as core 30 on socket 0 00:07:09.740 EAL: Detected lcore 28 as core 0 on socket 1 00:07:09.740 EAL: Detected lcore 29 as core 1 on socket 1 00:07:09.740 EAL: Detected lcore 30 as core 2 on socket 1 00:07:09.740 EAL: Detected lcore 31 as core 3 on socket 1 00:07:09.740 EAL: Detected lcore 32 as core 4 on socket 1 00:07:09.740 EAL: Detected lcore 33 as core 5 on socket 1 00:07:09.740 EAL: Detected lcore 34 as core 6 on socket 1 00:07:09.740 EAL: Detected lcore 35 as core 8 on socket 1 00:07:09.740 EAL: Detected lcore 36 as core 9 on socket 1 00:07:09.740 EAL: Detected lcore 37 as core 10 on socket 1 00:07:09.740 EAL: Detected lcore 38 as core 11 on socket 1 00:07:09.740 EAL: Detected lcore 39 as core 12 on socket 1 00:07:09.740 EAL: Detected lcore 40 as core 13 on socket 1 00:07:09.740 EAL: Detected lcore 41 as core 14 on socket 1 00:07:09.740 EAL: Detected lcore 42 as core 16 on socket 1 00:07:09.740 EAL: Detected lcore 43 as core 17 on socket 1 00:07:09.740 EAL: Detected lcore 44 as core 18 on socket 1 00:07:09.740 EAL: Detected lcore 45 as core 19 on socket 1 00:07:09.740 EAL: Detected lcore 46 as core 20 on socket 1 00:07:09.740 EAL: Detected lcore 47 as core 21 on socket 1 00:07:09.740 EAL: Detected lcore 48 as core 22 on socket 1 00:07:09.740 EAL: Detected lcore 49 as core 24 on socket 1 00:07:09.740 EAL: Detected lcore 50 as core 25 on socket 1 00:07:09.740 EAL: Detected lcore 51 as core 26 on socket 1 00:07:09.740 EAL: Detected lcore 52 as core 27 on socket 1 00:07:09.740 EAL: Detected lcore 53 as core 28 on socket 1 00:07:09.740 EAL: Detected lcore 54 as core 29 on socket 1 00:07:09.740 EAL: Detected lcore 55 as core 30 on socket 1 00:07:09.740 EAL: Detected lcore 56 as core 0 on socket 0 00:07:09.740 EAL: Detected lcore 57 as core 1 on socket 0 00:07:09.740 EAL: Detected lcore 58 as core 2 on socket 0 00:07:09.740 EAL: Detected lcore 59 as 
core 3 on socket 0 00:07:09.740 EAL: Detected lcore 60 as core 4 on socket 0 00:07:09.740 EAL: Detected lcore 61 as core 5 on socket 0 00:07:09.740 EAL: Detected lcore 62 as core 6 on socket 0 00:07:09.740 EAL: Detected lcore 63 as core 8 on socket 0 00:07:09.740 EAL: Detected lcore 64 as core 9 on socket 0 00:07:09.740 EAL: Detected lcore 65 as core 10 on socket 0 00:07:09.740 EAL: Detected lcore 66 as core 11 on socket 0 00:07:09.740 EAL: Detected lcore 67 as core 12 on socket 0 00:07:09.740 EAL: Detected lcore 68 as core 13 on socket 0 00:07:09.740 EAL: Detected lcore 69 as core 14 on socket 0 00:07:09.740 EAL: Detected lcore 70 as core 16 on socket 0 00:07:09.740 EAL: Detected lcore 71 as core 17 on socket 0 00:07:09.740 EAL: Detected lcore 72 as core 18 on socket 0 00:07:09.740 EAL: Detected lcore 73 as core 19 on socket 0 00:07:09.740 EAL: Detected lcore 74 as core 20 on socket 0 00:07:09.740 EAL: Detected lcore 75 as core 21 on socket 0 00:07:09.740 EAL: Detected lcore 76 as core 22 on socket 0 00:07:09.740 EAL: Detected lcore 77 as core 24 on socket 0 00:07:09.740 EAL: Detected lcore 78 as core 25 on socket 0 00:07:09.740 EAL: Detected lcore 79 as core 26 on socket 0 00:07:09.740 EAL: Detected lcore 80 as core 27 on socket 0 00:07:09.740 EAL: Detected lcore 81 as core 28 on socket 0 00:07:09.740 EAL: Detected lcore 82 as core 29 on socket 0 00:07:09.740 EAL: Detected lcore 83 as core 30 on socket 0 00:07:09.740 EAL: Detected lcore 84 as core 0 on socket 1 00:07:09.740 EAL: Detected lcore 85 as core 1 on socket 1 00:07:09.740 EAL: Detected lcore 86 as core 2 on socket 1 00:07:09.740 EAL: Detected lcore 87 as core 3 on socket 1 00:07:09.740 EAL: Detected lcore 88 as core 4 on socket 1 00:07:09.740 EAL: Detected lcore 89 as core 5 on socket 1 00:07:09.740 EAL: Detected lcore 90 as core 6 on socket 1 00:07:09.740 EAL: Detected lcore 91 as core 8 on socket 1 00:07:09.740 EAL: Detected lcore 92 as core 9 on socket 1 00:07:09.740 EAL: Detected lcore 93 as core 10 
on socket 1 00:07:09.740 EAL: Detected lcore 94 as core 11 on socket 1 00:07:09.740 EAL: Detected lcore 95 as core 12 on socket 1 00:07:09.740 EAL: Detected lcore 96 as core 13 on socket 1 00:07:09.740 EAL: Detected lcore 97 as core 14 on socket 1 00:07:09.740 EAL: Detected lcore 98 as core 16 on socket 1 00:07:09.740 EAL: Detected lcore 99 as core 17 on socket 1 00:07:09.740 EAL: Detected lcore 100 as core 18 on socket 1 00:07:09.740 EAL: Detected lcore 101 as core 19 on socket 1 00:07:09.740 EAL: Detected lcore 102 as core 20 on socket 1 00:07:09.740 EAL: Detected lcore 103 as core 21 on socket 1 00:07:09.740 EAL: Detected lcore 104 as core 22 on socket 1 00:07:09.740 EAL: Detected lcore 105 as core 24 on socket 1 00:07:09.740 EAL: Detected lcore 106 as core 25 on socket 1 00:07:09.740 EAL: Detected lcore 107 as core 26 on socket 1 00:07:09.740 EAL: Detected lcore 108 as core 27 on socket 1 00:07:09.740 EAL: Detected lcore 109 as core 28 on socket 1 00:07:09.740 EAL: Detected lcore 110 as core 29 on socket 1 00:07:09.740 EAL: Detected lcore 111 as core 30 on socket 1 00:07:09.740 EAL: Maximum logical cores by configuration: 128 00:07:09.740 EAL: Detected CPU lcores: 112 00:07:09.740 EAL: Detected NUMA nodes: 2 00:07:09.740 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:09.740 EAL: Detected shared linkage of DPDK 00:07:09.740 EAL: No shared files mode enabled, IPC will be disabled 00:07:09.740 EAL: Bus pci wants IOVA as 'DC' 00:07:09.740 EAL: Buses did not request a specific IOVA mode. 00:07:09.740 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:09.740 EAL: Selected IOVA mode 'VA' 00:07:09.740 EAL: Probing VFIO support... 
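EAL settles on IOVA mode 'VA' here because the host exposes a working IOMMU. A rough userspace version of the same check, assuming the standard /sys/kernel/iommu_groups sysfs layout — illustrative only; EAL's actual probe is considerably more involved:

```shell
# If the kernel has an active IOMMU, devices appear under iommu_groups;
# an empty or missing directory means DMA must fall back to physical
# addresses (IOVA as PA).
if [ -d /sys/kernel/iommu_groups ] && \
   [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  iova_mode=VA
else
  iova_mode=PA
fi
echo "IOVA mode: $iova_mode"
```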
00:07:09.740 EAL: IOMMU type 1 (Type 1) is supported 00:07:09.740 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:09.740 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:09.740 EAL: VFIO support initialized 00:07:09.740 EAL: Ask a virtual area of 0x2e000 bytes 00:07:09.740 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:09.740 EAL: Setting up physically contiguous memory... 00:07:09.740 EAL: Setting maximum number of open files to 524288 00:07:09.740 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:09.740 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:09.740 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:09.740 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.740 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:09.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:09.740 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.740 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:09.740 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:09.740 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.740 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:09.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:09.741 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:09.741 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:09.741 EAL: Ask a virtual area of 0x61000 bytes 00:07:09.741 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:09.741 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:09.741 EAL: Ask a virtual area of 0x400000000 bytes 00:07:09.741 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:07:09.741 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:09.741 EAL: Hugepages will be freed exactly as allocated. 00:07:09.741 EAL: No shared files mode enabled, IPC is disabled 00:07:09.741 EAL: No shared files mode enabled, IPC is disabled 00:07:09.741 EAL: TSC frequency is ~2200000 KHz 00:07:09.741 EAL: Main lcore 0 is ready (tid=7f64aaa44a00;cpuset=[0]) 00:07:09.741 EAL: Trying to obtain current memory policy. 00:07:09.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:09.741 EAL: Restoring previous memory policy: 0 00:07:09.741 EAL: request: mp_malloc_sync 00:07:09.741 EAL: No shared files mode enabled, IPC is disabled 00:07:09.741 EAL: Heap on socket 0 was expanded by 2MB 00:07:09.741 EAL: No shared files mode enabled, IPC is disabled 00:07:09.741 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:09.741 EAL: Mem event callback 'spdk:(nil)' registered 00:07:10.001 00:07:10.001 00:07:10.001 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.001 http://cunit.sourceforge.net/ 00:07:10.001 00:07:10.001 00:07:10.001 Suite: components_suite 00:07:10.001 Test: vtophys_malloc_test ...passed 00:07:10.001 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 4MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 4MB 00:07:10.001 EAL: Trying to obtain current memory policy. 
00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 6MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 6MB 00:07:10.001 EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 10MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 10MB 00:07:10.001 EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 18MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 18MB 00:07:10.001 EAL: Trying to obtain current memory policy. 
00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 34MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 34MB 00:07:10.001 EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 66MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 66MB 00:07:10.001 EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 130MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 130MB 00:07:10.001 EAL: Trying to obtain current memory policy. 
00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.001 EAL: Restoring previous memory policy: 4 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was expanded by 258MB 00:07:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.001 EAL: request: mp_malloc_sync 00:07:10.001 EAL: No shared files mode enabled, IPC is disabled 00:07:10.001 EAL: Heap on socket 0 was shrunk by 258MB 00:07:10.001 EAL: Trying to obtain current memory policy. 00:07:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.260 EAL: Restoring previous memory policy: 4 00:07:10.260 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.260 EAL: request: mp_malloc_sync 00:07:10.260 EAL: No shared files mode enabled, IPC is disabled 00:07:10.260 EAL: Heap on socket 0 was expanded by 514MB 00:07:10.260 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.519 EAL: request: mp_malloc_sync 00:07:10.519 EAL: No shared files mode enabled, IPC is disabled 00:07:10.519 EAL: Heap on socket 0 was shrunk by 514MB 00:07:10.519 EAL: Trying to obtain current memory policy. 
00:07:10.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.519 EAL: Restoring previous memory policy: 4 00:07:10.519 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.519 EAL: request: mp_malloc_sync 00:07:10.519 EAL: No shared files mode enabled, IPC is disabled 00:07:10.519 EAL: Heap on socket 0 was expanded by 1026MB 00:07:10.778 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.037 EAL: request: mp_malloc_sync 00:07:11.037 EAL: No shared files mode enabled, IPC is disabled 00:07:11.037 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:11.037 passed 00:07:11.037 00:07:11.037 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.037 suites 1 1 n/a 0 0 00:07:11.037 tests 2 2 2 0 0 00:07:11.037 asserts 497 497 497 0 n/a 00:07:11.037 00:07:11.037 Elapsed time = 1.019 seconds 00:07:11.037 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.037 EAL: request: mp_malloc_sync 00:07:11.037 EAL: No shared files mode enabled, IPC is disabled 00:07:11.037 EAL: Heap on socket 0 was shrunk by 2MB 00:07:11.037 EAL: No shared files mode enabled, IPC is disabled 00:07:11.037 EAL: No shared files mode enabled, IPC is disabled 00:07:11.037 EAL: No shared files mode enabled, IPC is disabled 00:07:11.037 00:07:11.037 real 0m1.166s 00:07:11.037 user 0m0.687s 00:07:11.037 sys 0m0.452s 00:07:11.037 12:13:42 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.037 12:13:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:11.037 ************************************ 00:07:11.037 END TEST env_vtophys 00:07:11.037 ************************************ 00:07:11.037 12:13:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:11.037 12:13:42 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:11.037 12:13:42 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.037 12:13:42 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.037 
************************************ 00:07:11.037 START TEST env_pci 00:07:11.037 ************************************ 00:07:11.037 12:13:42 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:11.037 00:07:11.037 00:07:11.037 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.037 http://cunit.sourceforge.net/ 00:07:11.037 00:07:11.037 00:07:11.037 Suite: pci 00:07:11.037 Test: pci_hook ...[2024-11-06 12:13:42.498538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4159458 has claimed it 00:07:11.037 EAL: Cannot find device (10000:00:01.0) 00:07:11.037 EAL: Failed to attach device on primary process 00:07:11.037 passed 00:07:11.037 00:07:11.037 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.037 suites 1 1 n/a 0 0 00:07:11.037 tests 1 1 1 0 0 00:07:11.037 asserts 25 25 25 0 n/a 00:07:11.037 00:07:11.037 Elapsed time = 0.028 seconds 00:07:11.037 00:07:11.037 real 0m0.048s 00:07:11.037 user 0m0.013s 00:07:11.037 sys 0m0.035s 00:07:11.037 12:13:42 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.037 12:13:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:11.037 ************************************ 00:07:11.037 END TEST env_pci 00:07:11.037 ************************************ 00:07:11.037 12:13:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:11.037 12:13:42 env -- env/env.sh@15 -- # uname 00:07:11.037 12:13:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:11.037 12:13:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:11.037 12:13:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.037 12:13:42 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:11.037 12:13:42 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.037 12:13:42 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.037 ************************************ 00:07:11.037 START TEST env_dpdk_post_init 00:07:11.037 ************************************ 00:07:11.037 12:13:42 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.037 EAL: Detected CPU lcores: 112 00:07:11.037 EAL: Detected NUMA nodes: 2 00:07:11.037 EAL: Detected shared linkage of DPDK 00:07:11.037 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:11.037 EAL: Selected IOVA mode 'VA' 00:07:11.037 EAL: VFIO support initialized 00:07:11.037 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:11.297 EAL: Using IOMMU type 1 (Type 1) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:11.297 EAL: Ignore mapping IO port bar(1) 00:07:11.297 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:11.556 EAL: Ignore mapping IO port bar(1) 00:07:11.556 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:11.556 EAL: Ignore mapping IO port bar(1) 00:07:11.556 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:11.556 EAL: Ignore mapping IO port bar(1) 00:07:11.556 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:07:12.261 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:07:15.635 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:07:15.635 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:07:15.635 Starting DPDK initialization... 00:07:15.635 Starting SPDK post initialization... 00:07:15.635 SPDK NVMe probe 00:07:15.635 Attaching to 0000:86:00.0 00:07:15.635 Attached to 0000:86:00.0 00:07:15.635 Cleaning up... 
00:07:15.635 00:07:15.635 real 0m4.458s 00:07:15.635 user 0m3.043s 00:07:15.635 sys 0m0.474s 00:07:15.635 12:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.635 12:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:15.635 ************************************ 00:07:15.635 END TEST env_dpdk_post_init 00:07:15.635 ************************************ 00:07:15.635 12:13:47 env -- env/env.sh@26 -- # uname 00:07:15.635 12:13:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:15.635 12:13:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:15.635 12:13:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.635 12:13:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.635 12:13:47 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.635 ************************************ 00:07:15.635 START TEST env_mem_callbacks 00:07:15.635 ************************************ 00:07:15.635 12:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:15.635 EAL: Detected CPU lcores: 112 00:07:15.635 EAL: Detected NUMA nodes: 2 00:07:15.635 EAL: Detected shared linkage of DPDK 00:07:15.635 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:15.635 EAL: Selected IOVA mode 'VA' 00:07:15.635 EAL: VFIO support initialized 00:07:15.635 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:15.635 00:07:15.635 00:07:15.635 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.635 http://cunit.sourceforge.net/ 00:07:15.635 00:07:15.635 00:07:15.635 Suite: memory 00:07:15.635 Test: test ... 
00:07:15.635 register 0x200000200000 2097152 00:07:15.635 malloc 3145728 00:07:15.635 register 0x200000400000 4194304 00:07:15.635 buf 0x200000500000 len 3145728 PASSED 00:07:15.635 malloc 64 00:07:15.635 buf 0x2000004fff40 len 64 PASSED 00:07:15.635 malloc 4194304 00:07:15.635 register 0x200000800000 6291456 00:07:15.635 buf 0x200000a00000 len 4194304 PASSED 00:07:15.635 free 0x200000500000 3145728 00:07:15.635 free 0x2000004fff40 64 00:07:15.635 unregister 0x200000400000 4194304 PASSED 00:07:15.635 free 0x200000a00000 4194304 00:07:15.635 unregister 0x200000800000 6291456 PASSED 00:07:15.635 malloc 8388608 00:07:15.635 register 0x200000400000 10485760 00:07:15.635 buf 0x200000600000 len 8388608 PASSED 00:07:15.635 free 0x200000600000 8388608 00:07:15.635 unregister 0x200000400000 10485760 PASSED 00:07:15.635 passed 00:07:15.635 00:07:15.635 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.635 suites 1 1 n/a 0 0 00:07:15.635 tests 1 1 1 0 0 00:07:15.635 asserts 15 15 15 0 n/a 00:07:15.635 00:07:15.635 Elapsed time = 0.007 seconds 00:07:15.635 00:07:15.635 real 0m0.063s 00:07:15.635 user 0m0.022s 00:07:15.635 sys 0m0.041s 00:07:15.635 12:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.635 12:13:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:15.635 ************************************ 00:07:15.635 END TEST env_mem_callbacks 00:07:15.635 ************************************ 00:07:15.635 00:07:15.635 real 0m6.454s 00:07:15.635 user 0m4.194s 00:07:15.635 sys 0m1.325s 00:07:15.635 12:13:47 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.635 12:13:47 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.635 ************************************ 00:07:15.635 END TEST env 00:07:15.635 ************************************ 00:07:15.894 12:13:47 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:15.894 12:13:47 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.894 12:13:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.894 12:13:47 -- common/autotest_common.sh@10 -- # set +x 00:07:15.894 ************************************ 00:07:15.894 START TEST rpc 00:07:15.894 ************************************ 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:15.895 * Looking for test storage... 00:07:15.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.895 12:13:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.895 12:13:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.895 12:13:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.895 12:13:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.895 12:13:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.895 12:13:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:15.895 12:13:47 rpc -- scripts/common.sh@345 -- # : 1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.895 12:13:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.895 12:13:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@353 -- # local d=1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.895 12:13:47 rpc -- scripts/common.sh@355 -- # echo 1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.895 12:13:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@353 -- # local d=2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.895 12:13:47 rpc -- scripts/common.sh@355 -- # echo 2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.895 12:13:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.895 12:13:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.895 12:13:47 rpc -- scripts/common.sh@368 -- # return 0 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:15.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.895 --rc genhtml_branch_coverage=1 00:07:15.895 --rc genhtml_function_coverage=1 00:07:15.895 --rc genhtml_legend=1 00:07:15.895 --rc geninfo_all_blocks=1 00:07:15.895 --rc geninfo_unexecuted_blocks=1 00:07:15.895 00:07:15.895 ' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:15.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.895 --rc genhtml_branch_coverage=1 00:07:15.895 --rc genhtml_function_coverage=1 00:07:15.895 --rc genhtml_legend=1 00:07:15.895 --rc geninfo_all_blocks=1 00:07:15.895 --rc geninfo_unexecuted_blocks=1 00:07:15.895 00:07:15.895 ' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:15.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:15.895 --rc genhtml_branch_coverage=1 00:07:15.895 --rc genhtml_function_coverage=1 00:07:15.895 --rc genhtml_legend=1 00:07:15.895 --rc geninfo_all_blocks=1 00:07:15.895 --rc geninfo_unexecuted_blocks=1 00:07:15.895 00:07:15.895 ' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:15.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.895 --rc genhtml_branch_coverage=1 00:07:15.895 --rc genhtml_function_coverage=1 00:07:15.895 --rc genhtml_legend=1 00:07:15.895 --rc geninfo_all_blocks=1 00:07:15.895 --rc geninfo_unexecuted_blocks=1 00:07:15.895 00:07:15.895 ' 00:07:15.895 12:13:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4160398 00:07:15.895 12:13:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.895 12:13:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4160398 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@833 -- # '[' -z 4160398 ']' 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.895 12:13:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.895 12:13:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:16.155 [2024-11-06 12:13:47.534303] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:16.155 [2024-11-06 12:13:47.534362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160398 ] 00:07:16.155 [2024-11-06 12:13:47.628009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.155 [2024-11-06 12:13:47.677234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:16.155 [2024-11-06 12:13:47.677277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4160398' to capture a snapshot of events at runtime. 00:07:16.155 [2024-11-06 12:13:47.677287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.155 [2024-11-06 12:13:47.677296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.155 [2024-11-06 12:13:47.677304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4160398 for offline analysis/debug. 
00:07:16.155 [2024-11-06 12:13:47.678032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.415 12:13:47 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.415 12:13:47 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:16.415 12:13:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:16.415 12:13:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:16.415 12:13:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:16.415 12:13:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:16.415 12:13:47 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:16.415 12:13:47 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.415 12:13:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.415 ************************************ 00:07:16.415 START TEST rpc_integrity 00:07:16.415 ************************************ 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:16.415 12:13:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.415 12:13:47 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:16.415 12:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:16.415 12:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:16.415 12:13:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.415 12:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.415 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.415 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:16.415 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:16.415 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.415 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.415 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.415 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:16.415 { 00:07:16.415 "name": "Malloc0", 00:07:16.415 "aliases": [ 00:07:16.415 "45c77eb7-f992-4a4d-8295-23ca1c219cd4" 00:07:16.415 ], 00:07:16.415 "product_name": "Malloc disk", 00:07:16.415 "block_size": 512, 00:07:16.415 "num_blocks": 16384, 00:07:16.415 "uuid": "45c77eb7-f992-4a4d-8295-23ca1c219cd4", 00:07:16.415 "assigned_rate_limits": { 00:07:16.415 "rw_ios_per_sec": 0, 00:07:16.415 "rw_mbytes_per_sec": 0, 00:07:16.415 "r_mbytes_per_sec": 0, 00:07:16.415 "w_mbytes_per_sec": 0 00:07:16.415 }, 00:07:16.415 "claimed": false, 00:07:16.415 "zoned": false, 00:07:16.415 "supported_io_types": { 00:07:16.415 "read": true, 00:07:16.415 "write": true, 00:07:16.415 "unmap": true, 00:07:16.415 "flush": true, 00:07:16.415 "reset": true, 00:07:16.415 "nvme_admin": false, 00:07:16.415 "nvme_io": false, 00:07:16.415 "nvme_io_md": false, 00:07:16.415 "write_zeroes": true, 00:07:16.415 "zcopy": true, 00:07:16.415 "get_zone_info": false, 00:07:16.415 
"zone_management": false, 00:07:16.415 "zone_append": false, 00:07:16.415 "compare": false, 00:07:16.415 "compare_and_write": false, 00:07:16.415 "abort": true, 00:07:16.415 "seek_hole": false, 00:07:16.415 "seek_data": false, 00:07:16.415 "copy": true, 00:07:16.415 "nvme_iov_md": false 00:07:16.415 }, 00:07:16.415 "memory_domains": [ 00:07:16.415 { 00:07:16.415 "dma_device_id": "system", 00:07:16.415 "dma_device_type": 1 00:07:16.415 }, 00:07:16.415 { 00:07:16.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.415 "dma_device_type": 2 00:07:16.415 } 00:07:16.415 ], 00:07:16.415 "driver_specific": {} 00:07:16.415 } 00:07:16.415 ]' 00:07:16.415 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:16.674 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:16.674 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.674 [2024-11-06 12:13:48.065235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:16.674 [2024-11-06 12:13:48.065271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.674 [2024-11-06 12:13:48.065288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66bc10 00:07:16.674 [2024-11-06 12:13:48.065297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.674 [2024-11-06 12:13:48.066865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.674 [2024-11-06 12:13:48.066891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:16.674 Passthru0 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.674 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.674 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.674 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:16.674 { 00:07:16.674 "name": "Malloc0", 00:07:16.674 "aliases": [ 00:07:16.674 "45c77eb7-f992-4a4d-8295-23ca1c219cd4" 00:07:16.674 ], 00:07:16.674 "product_name": "Malloc disk", 00:07:16.674 "block_size": 512, 00:07:16.674 "num_blocks": 16384, 00:07:16.674 "uuid": "45c77eb7-f992-4a4d-8295-23ca1c219cd4", 00:07:16.674 "assigned_rate_limits": { 00:07:16.674 "rw_ios_per_sec": 0, 00:07:16.674 "rw_mbytes_per_sec": 0, 00:07:16.674 "r_mbytes_per_sec": 0, 00:07:16.674 "w_mbytes_per_sec": 0 00:07:16.674 }, 00:07:16.674 "claimed": true, 00:07:16.674 "claim_type": "exclusive_write", 00:07:16.674 "zoned": false, 00:07:16.674 "supported_io_types": { 00:07:16.674 "read": true, 00:07:16.674 "write": true, 00:07:16.674 "unmap": true, 00:07:16.674 "flush": true, 00:07:16.674 "reset": true, 00:07:16.674 "nvme_admin": false, 00:07:16.674 "nvme_io": false, 00:07:16.674 "nvme_io_md": false, 00:07:16.674 "write_zeroes": true, 00:07:16.674 "zcopy": true, 00:07:16.674 "get_zone_info": false, 00:07:16.674 "zone_management": false, 00:07:16.674 "zone_append": false, 00:07:16.674 "compare": false, 00:07:16.674 "compare_and_write": false, 00:07:16.674 "abort": true, 00:07:16.674 "seek_hole": false, 00:07:16.674 "seek_data": false, 00:07:16.674 "copy": true, 00:07:16.674 "nvme_iov_md": false 00:07:16.674 }, 00:07:16.674 "memory_domains": [ 00:07:16.674 { 00:07:16.674 "dma_device_id": "system", 00:07:16.674 "dma_device_type": 1 00:07:16.674 }, 00:07:16.674 { 00:07:16.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.674 "dma_device_type": 2 00:07:16.674 } 00:07:16.674 ], 00:07:16.674 "driver_specific": {} 00:07:16.674 }, 00:07:16.675 { 
00:07:16.675 "name": "Passthru0", 00:07:16.675 "aliases": [ 00:07:16.675 "93c47bcf-02a9-555a-9976-2d5665a3869f" 00:07:16.675 ], 00:07:16.675 "product_name": "passthru", 00:07:16.675 "block_size": 512, 00:07:16.675 "num_blocks": 16384, 00:07:16.675 "uuid": "93c47bcf-02a9-555a-9976-2d5665a3869f", 00:07:16.675 "assigned_rate_limits": { 00:07:16.675 "rw_ios_per_sec": 0, 00:07:16.675 "rw_mbytes_per_sec": 0, 00:07:16.675 "r_mbytes_per_sec": 0, 00:07:16.675 "w_mbytes_per_sec": 0 00:07:16.675 }, 00:07:16.675 "claimed": false, 00:07:16.675 "zoned": false, 00:07:16.675 "supported_io_types": { 00:07:16.675 "read": true, 00:07:16.675 "write": true, 00:07:16.675 "unmap": true, 00:07:16.675 "flush": true, 00:07:16.675 "reset": true, 00:07:16.675 "nvme_admin": false, 00:07:16.675 "nvme_io": false, 00:07:16.675 "nvme_io_md": false, 00:07:16.675 "write_zeroes": true, 00:07:16.675 "zcopy": true, 00:07:16.675 "get_zone_info": false, 00:07:16.675 "zone_management": false, 00:07:16.675 "zone_append": false, 00:07:16.675 "compare": false, 00:07:16.675 "compare_and_write": false, 00:07:16.675 "abort": true, 00:07:16.675 "seek_hole": false, 00:07:16.675 "seek_data": false, 00:07:16.675 "copy": true, 00:07:16.675 "nvme_iov_md": false 00:07:16.675 }, 00:07:16.675 "memory_domains": [ 00:07:16.675 { 00:07:16.675 "dma_device_id": "system", 00:07:16.675 "dma_device_type": 1 00:07:16.675 }, 00:07:16.675 { 00:07:16.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.675 "dma_device_type": 2 00:07:16.675 } 00:07:16.675 ], 00:07:16.675 "driver_specific": { 00:07:16.675 "passthru": { 00:07:16.675 "name": "Passthru0", 00:07:16.675 "base_bdev_name": "Malloc0" 00:07:16.675 } 00:07:16.675 } 00:07:16.675 } 00:07:16.675 ]' 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:16.675 12:13:48 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:16.675 12:13:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:16.675 00:07:16.675 real 0m0.239s 00:07:16.675 user 0m0.153s 00:07:16.675 sys 0m0.020s 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 ************************************ 00:07:16.675 END TEST rpc_integrity 00:07:16.675 ************************************ 00:07:16.675 12:13:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:16.675 12:13:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:16.675 12:13:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.675 12:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 ************************************ 00:07:16.675 START TEST rpc_plugins 
00:07:16.675 ************************************ 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:16.675 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.675 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:16.675 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:16.675 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.675 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:16.675 { 00:07:16.675 "name": "Malloc1", 00:07:16.675 "aliases": [ 00:07:16.675 "48c8869a-1eae-4b26-bcbb-9d1270a254e0" 00:07:16.675 ], 00:07:16.675 "product_name": "Malloc disk", 00:07:16.675 "block_size": 4096, 00:07:16.675 "num_blocks": 256, 00:07:16.675 "uuid": "48c8869a-1eae-4b26-bcbb-9d1270a254e0", 00:07:16.675 "assigned_rate_limits": { 00:07:16.675 "rw_ios_per_sec": 0, 00:07:16.675 "rw_mbytes_per_sec": 0, 00:07:16.675 "r_mbytes_per_sec": 0, 00:07:16.675 "w_mbytes_per_sec": 0 00:07:16.675 }, 00:07:16.675 "claimed": false, 00:07:16.675 "zoned": false, 00:07:16.675 "supported_io_types": { 00:07:16.675 "read": true, 00:07:16.675 "write": true, 00:07:16.675 "unmap": true, 00:07:16.675 "flush": true, 00:07:16.675 "reset": true, 00:07:16.675 "nvme_admin": false, 00:07:16.675 "nvme_io": false, 00:07:16.675 "nvme_io_md": false, 00:07:16.675 "write_zeroes": true, 00:07:16.675 "zcopy": true, 00:07:16.675 "get_zone_info": false, 00:07:16.675 "zone_management": false, 00:07:16.675 
"zone_append": false, 00:07:16.675 "compare": false, 00:07:16.675 "compare_and_write": false, 00:07:16.675 "abort": true, 00:07:16.675 "seek_hole": false, 00:07:16.675 "seek_data": false, 00:07:16.675 "copy": true, 00:07:16.675 "nvme_iov_md": false 00:07:16.675 }, 00:07:16.675 "memory_domains": [ 00:07:16.675 { 00:07:16.675 "dma_device_id": "system", 00:07:16.675 "dma_device_type": 1 00:07:16.675 }, 00:07:16.675 { 00:07:16.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.675 "dma_device_type": 2 00:07:16.675 } 00:07:16.675 ], 00:07:16.675 "driver_specific": {} 00:07:16.675 } 00:07:16.675 ]' 00:07:16.675 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:16.934 12:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:16.934 00:07:16.934 real 0m0.140s 00:07:16.934 user 0m0.089s 00:07:16.934 sys 0m0.015s 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.934 12:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 ************************************ 
00:07:16.934 END TEST rpc_plugins 00:07:16.934 ************************************ 00:07:16.934 12:13:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:16.934 12:13:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:16.934 12:13:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.934 12:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 ************************************ 00:07:16.934 START TEST rpc_trace_cmd_test 00:07:16.934 ************************************ 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:16.934 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4160398", 00:07:16.934 "tpoint_group_mask": "0x8", 00:07:16.934 "iscsi_conn": { 00:07:16.934 "mask": "0x2", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "scsi": { 00:07:16.934 "mask": "0x4", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "bdev": { 00:07:16.934 "mask": "0x8", 00:07:16.934 "tpoint_mask": "0xffffffffffffffff" 00:07:16.934 }, 00:07:16.934 "nvmf_rdma": { 00:07:16.934 "mask": "0x10", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "nvmf_tcp": { 00:07:16.934 "mask": "0x20", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "ftl": { 00:07:16.934 "mask": "0x40", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "blobfs": { 00:07:16.934 "mask": "0x80", 00:07:16.934 
"tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "dsa": { 00:07:16.934 "mask": "0x200", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "thread": { 00:07:16.934 "mask": "0x400", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "nvme_pcie": { 00:07:16.934 "mask": "0x800", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "iaa": { 00:07:16.934 "mask": "0x1000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "nvme_tcp": { 00:07:16.934 "mask": "0x2000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "bdev_nvme": { 00:07:16.934 "mask": "0x4000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "sock": { 00:07:16.934 "mask": "0x8000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "blob": { 00:07:16.934 "mask": "0x10000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "bdev_raid": { 00:07:16.934 "mask": "0x20000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 }, 00:07:16.934 "scheduler": { 00:07:16.934 "mask": "0x40000", 00:07:16.934 "tpoint_mask": "0x0" 00:07:16.934 } 00:07:16.934 }' 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:16.934 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:17.193 00:07:17.193 real 0m0.216s 00:07:17.193 user 0m0.180s 00:07:17.193 sys 0m0.026s 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.193 12:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.193 ************************************ 00:07:17.193 END TEST rpc_trace_cmd_test 00:07:17.193 ************************************ 00:07:17.193 12:13:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:17.193 12:13:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:17.193 12:13:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:17.193 12:13:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.193 12:13:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.193 12:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.193 ************************************ 00:07:17.193 START TEST rpc_daemon_integrity 00:07:17.193 ************************************ 00:07:17.193 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:17.193 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:17.193 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.193 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.194 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.452 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:17.453 { 00:07:17.453 "name": "Malloc2", 00:07:17.453 "aliases": [ 00:07:17.453 "73bbe9f0-8bb3-4986-9983-f61a21ba471d" 00:07:17.453 ], 00:07:17.453 "product_name": "Malloc disk", 00:07:17.453 "block_size": 512, 00:07:17.453 "num_blocks": 16384, 00:07:17.453 "uuid": "73bbe9f0-8bb3-4986-9983-f61a21ba471d", 00:07:17.453 "assigned_rate_limits": { 00:07:17.453 "rw_ios_per_sec": 0, 00:07:17.453 "rw_mbytes_per_sec": 0, 00:07:17.453 "r_mbytes_per_sec": 0, 00:07:17.453 "w_mbytes_per_sec": 0 00:07:17.453 }, 00:07:17.453 "claimed": false, 00:07:17.453 "zoned": false, 00:07:17.453 "supported_io_types": { 00:07:17.453 "read": true, 00:07:17.453 "write": true, 00:07:17.453 "unmap": true, 00:07:17.453 "flush": true, 00:07:17.453 "reset": true, 00:07:17.453 "nvme_admin": false, 00:07:17.453 "nvme_io": false, 00:07:17.453 "nvme_io_md": false, 00:07:17.453 "write_zeroes": true, 00:07:17.453 "zcopy": true, 00:07:17.453 "get_zone_info": false, 00:07:17.453 "zone_management": false, 00:07:17.453 "zone_append": false, 00:07:17.453 "compare": false, 00:07:17.453 "compare_and_write": false, 00:07:17.453 "abort": true, 00:07:17.453 "seek_hole": false, 00:07:17.453 "seek_data": false, 00:07:17.453 "copy": true, 00:07:17.453 "nvme_iov_md": false 00:07:17.453 }, 00:07:17.453 "memory_domains": [ 00:07:17.453 { 
00:07:17.453 "dma_device_id": "system", 00:07:17.453 "dma_device_type": 1 00:07:17.453 }, 00:07:17.453 { 00:07:17.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.453 "dma_device_type": 2 00:07:17.453 } 00:07:17.453 ], 00:07:17.453 "driver_specific": {} 00:07:17.453 } 00:07:17.453 ]' 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 [2024-11-06 12:13:48.859540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:17.453 [2024-11-06 12:13:48.859573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.453 [2024-11-06 12:13:48.859590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66c5a0 00:07:17.453 [2024-11-06 12:13:48.859600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.453 [2024-11-06 12:13:48.861145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.453 [2024-11-06 12:13:48.861170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:17.453 Passthru0 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:17.453 { 00:07:17.453 "name": "Malloc2", 00:07:17.453 "aliases": [ 00:07:17.453 "73bbe9f0-8bb3-4986-9983-f61a21ba471d" 00:07:17.453 ], 00:07:17.453 "product_name": "Malloc disk", 00:07:17.453 "block_size": 512, 00:07:17.453 "num_blocks": 16384, 00:07:17.453 "uuid": "73bbe9f0-8bb3-4986-9983-f61a21ba471d", 00:07:17.453 "assigned_rate_limits": { 00:07:17.453 "rw_ios_per_sec": 0, 00:07:17.453 "rw_mbytes_per_sec": 0, 00:07:17.453 "r_mbytes_per_sec": 0, 00:07:17.453 "w_mbytes_per_sec": 0 00:07:17.453 }, 00:07:17.453 "claimed": true, 00:07:17.453 "claim_type": "exclusive_write", 00:07:17.453 "zoned": false, 00:07:17.453 "supported_io_types": { 00:07:17.453 "read": true, 00:07:17.453 "write": true, 00:07:17.453 "unmap": true, 00:07:17.453 "flush": true, 00:07:17.453 "reset": true, 00:07:17.453 "nvme_admin": false, 00:07:17.453 "nvme_io": false, 00:07:17.453 "nvme_io_md": false, 00:07:17.453 "write_zeroes": true, 00:07:17.453 "zcopy": true, 00:07:17.453 "get_zone_info": false, 00:07:17.453 "zone_management": false, 00:07:17.453 "zone_append": false, 00:07:17.453 "compare": false, 00:07:17.453 "compare_and_write": false, 00:07:17.453 "abort": true, 00:07:17.453 "seek_hole": false, 00:07:17.453 "seek_data": false, 00:07:17.453 "copy": true, 00:07:17.453 "nvme_iov_md": false 00:07:17.453 }, 00:07:17.453 "memory_domains": [ 00:07:17.453 { 00:07:17.453 "dma_device_id": "system", 00:07:17.453 "dma_device_type": 1 00:07:17.453 }, 00:07:17.453 { 00:07:17.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.453 "dma_device_type": 2 00:07:17.453 } 00:07:17.453 ], 00:07:17.453 "driver_specific": {} 00:07:17.453 }, 00:07:17.453 { 00:07:17.453 "name": "Passthru0", 00:07:17.453 "aliases": [ 00:07:17.453 "37e9d18a-bb14-5ad7-bbbd-1c27311ca0f7" 00:07:17.453 ], 00:07:17.453 "product_name": "passthru", 00:07:17.453 "block_size": 512, 00:07:17.453 "num_blocks": 16384, 00:07:17.453 "uuid": 
"37e9d18a-bb14-5ad7-bbbd-1c27311ca0f7", 00:07:17.453 "assigned_rate_limits": { 00:07:17.453 "rw_ios_per_sec": 0, 00:07:17.453 "rw_mbytes_per_sec": 0, 00:07:17.453 "r_mbytes_per_sec": 0, 00:07:17.453 "w_mbytes_per_sec": 0 00:07:17.453 }, 00:07:17.453 "claimed": false, 00:07:17.453 "zoned": false, 00:07:17.453 "supported_io_types": { 00:07:17.453 "read": true, 00:07:17.453 "write": true, 00:07:17.453 "unmap": true, 00:07:17.453 "flush": true, 00:07:17.453 "reset": true, 00:07:17.453 "nvme_admin": false, 00:07:17.453 "nvme_io": false, 00:07:17.453 "nvme_io_md": false, 00:07:17.453 "write_zeroes": true, 00:07:17.453 "zcopy": true, 00:07:17.453 "get_zone_info": false, 00:07:17.453 "zone_management": false, 00:07:17.453 "zone_append": false, 00:07:17.453 "compare": false, 00:07:17.453 "compare_and_write": false, 00:07:17.453 "abort": true, 00:07:17.453 "seek_hole": false, 00:07:17.453 "seek_data": false, 00:07:17.453 "copy": true, 00:07:17.453 "nvme_iov_md": false 00:07:17.453 }, 00:07:17.453 "memory_domains": [ 00:07:17.453 { 00:07:17.453 "dma_device_id": "system", 00:07:17.453 "dma_device_type": 1 00:07:17.453 }, 00:07:17.453 { 00:07:17.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.453 "dma_device_type": 2 00:07:17.453 } 00:07:17.453 ], 00:07:17.453 "driver_specific": { 00:07:17.453 "passthru": { 00:07:17.453 "name": "Passthru0", 00:07:17.453 "base_bdev_name": "Malloc2" 00:07:17.453 } 00:07:17.453 } 00:07:17.453 } 00:07:17.453 ]' 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:17.453 00:07:17.453 real 0m0.255s 00:07:17.453 user 0m0.176s 00:07:17.453 sys 0m0.022s 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.453 12:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.453 ************************************ 00:07:17.453 END TEST rpc_daemon_integrity 00:07:17.453 ************************************ 00:07:17.453 12:13:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:17.453 12:13:49 rpc -- rpc/rpc.sh@84 -- # killprocess 4160398 00:07:17.453 12:13:49 rpc -- common/autotest_common.sh@952 -- # '[' -z 4160398 ']' 00:07:17.453 12:13:49 rpc -- common/autotest_common.sh@956 -- # kill -0 4160398 00:07:17.453 12:13:49 rpc -- common/autotest_common.sh@957 -- # uname 00:07:17.453 12:13:49 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:17.453 12:13:49 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4160398 00:07:17.713 12:13:49 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:17.713 12:13:49 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:17.713 12:13:49 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4160398' 00:07:17.713 killing process with pid 4160398 00:07:17.713 12:13:49 rpc -- common/autotest_common.sh@971 -- # kill 4160398 00:07:17.713 12:13:49 rpc -- common/autotest_common.sh@976 -- # wait 4160398 00:07:17.972 00:07:17.972 real 0m2.111s 00:07:17.972 user 0m2.712s 00:07:17.972 sys 0m0.673s 00:07:17.972 12:13:49 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.972 12:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.972 ************************************ 00:07:17.972 END TEST rpc 00:07:17.972 ************************************ 00:07:17.972 12:13:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:17.972 12:13:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.972 12:13:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.972 12:13:49 -- common/autotest_common.sh@10 -- # set +x 00:07:17.972 ************************************ 00:07:17.972 START TEST skip_rpc 00:07:17.972 ************************************ 00:07:17.972 12:13:49 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:17.972 * Looking for test storage... 
00:07:17.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:17.972 12:13:49 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:17.972 12:13:49 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:17.972 12:13:49 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.231 12:13:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.231 --rc genhtml_branch_coverage=1 00:07:18.231 --rc genhtml_function_coverage=1 00:07:18.231 --rc genhtml_legend=1 00:07:18.231 --rc geninfo_all_blocks=1 00:07:18.231 --rc geninfo_unexecuted_blocks=1 00:07:18.231 00:07:18.231 ' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.231 --rc genhtml_branch_coverage=1 00:07:18.231 --rc genhtml_function_coverage=1 00:07:18.231 --rc genhtml_legend=1 00:07:18.231 --rc geninfo_all_blocks=1 00:07:18.231 --rc geninfo_unexecuted_blocks=1 00:07:18.231 00:07:18.231 ' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.231 --rc genhtml_branch_coverage=1 00:07:18.231 --rc genhtml_function_coverage=1 00:07:18.231 --rc genhtml_legend=1 00:07:18.231 --rc geninfo_all_blocks=1 00:07:18.231 --rc geninfo_unexecuted_blocks=1 00:07:18.231 00:07:18.231 ' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.231 --rc genhtml_branch_coverage=1 00:07:18.231 --rc genhtml_function_coverage=1 00:07:18.231 --rc genhtml_legend=1 00:07:18.231 --rc geninfo_all_blocks=1 00:07:18.231 --rc geninfo_unexecuted_blocks=1 00:07:18.231 00:07:18.231 ' 00:07:18.231 12:13:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:18.231 12:13:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:18.231 12:13:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.231 12:13:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.231 ************************************ 00:07:18.231 START TEST skip_rpc 00:07:18.231 ************************************ 00:07:18.231 12:13:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:18.231 12:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4161097 00:07:18.231 12:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:18.231 12:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.231 12:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:18.231 [2024-11-06 12:13:49.757187] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:18.231 [2024-11-06 12:13:49.757243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161097 ] 00:07:18.490 [2024-11-06 12:13:49.850528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.490 [2024-11-06 12:13:49.899115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.760 12:13:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4161097 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 4161097 ']' 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 4161097 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4161097 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4161097' 00:07:23.760 killing process with pid 4161097 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 4161097 00:07:23.760 12:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 4161097 00:07:23.760 00:07:23.760 real 0m5.401s 00:07:23.760 user 0m5.144s 00:07:23.760 sys 0m0.301s 00:07:23.760 12:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.760 12:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 ************************************ 00:07:23.760 END TEST skip_rpc 00:07:23.760 ************************************ 00:07:23.760 12:13:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:23.760 12:13:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.760 12:13:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.760 12:13:55 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 ************************************ 00:07:23.760 START TEST skip_rpc_with_json 00:07:23.760 ************************************ 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4162106 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4162106 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 4162106 ']' 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.760 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 [2024-11-06 12:13:55.225488] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:23.760 [2024-11-06 12:13:55.225528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162106 ] 00:07:23.760 [2024-11-06 12:13:55.306275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.760 [2024-11-06 12:13:55.354495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.019 [2024-11-06 12:13:55.580100] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:24.019 request: 00:07:24.019 { 00:07:24.019 "trtype": "tcp", 00:07:24.019 "method": "nvmf_get_transports", 00:07:24.019 "req_id": 1 00:07:24.019 } 00:07:24.019 Got JSON-RPC error response 00:07:24.019 response: 00:07:24.019 { 00:07:24.019 "code": -19, 00:07:24.019 "message": "No such device" 00:07:24.019 } 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.019 [2024-11-06 12:13:55.592243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.019 12:13:55 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.019 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.278 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.278 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:24.278 { 00:07:24.278 "subsystems": [ 00:07:24.278 { 00:07:24.278 "subsystem": "fsdev", 00:07:24.278 "config": [ 00:07:24.278 { 00:07:24.278 "method": "fsdev_set_opts", 00:07:24.278 "params": { 00:07:24.278 "fsdev_io_pool_size": 65535, 00:07:24.278 "fsdev_io_cache_size": 256 00:07:24.278 } 00:07:24.278 } 00:07:24.278 ] 00:07:24.278 }, 00:07:24.278 { 00:07:24.278 "subsystem": "vfio_user_target", 00:07:24.278 "config": null 00:07:24.278 }, 00:07:24.278 { 00:07:24.278 "subsystem": "keyring", 00:07:24.278 "config": [] 00:07:24.278 }, 00:07:24.278 { 00:07:24.278 "subsystem": "iobuf", 00:07:24.278 "config": [ 00:07:24.278 { 00:07:24.278 "method": "iobuf_set_options", 00:07:24.278 "params": { 00:07:24.278 "small_pool_count": 8192, 00:07:24.278 "large_pool_count": 1024, 00:07:24.278 "small_bufsize": 8192, 00:07:24.278 "large_bufsize": 135168, 00:07:24.278 "enable_numa": false 00:07:24.278 } 00:07:24.278 } 00:07:24.278 ] 00:07:24.278 }, 00:07:24.278 { 00:07:24.278 "subsystem": "sock", 00:07:24.278 "config": [ 00:07:24.278 { 00:07:24.278 "method": "sock_set_default_impl", 00:07:24.278 "params": { 00:07:24.278 "impl_name": "posix" 00:07:24.278 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "sock_impl_set_options", 00:07:24.279 "params": { 00:07:24.279 "impl_name": "ssl", 00:07:24.279 "recv_buf_size": 4096, 00:07:24.279 "send_buf_size": 4096, 
00:07:24.279 "enable_recv_pipe": true, 00:07:24.279 "enable_quickack": false, 00:07:24.279 "enable_placement_id": 0, 00:07:24.279 "enable_zerocopy_send_server": true, 00:07:24.279 "enable_zerocopy_send_client": false, 00:07:24.279 "zerocopy_threshold": 0, 00:07:24.279 "tls_version": 0, 00:07:24.279 "enable_ktls": false 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "sock_impl_set_options", 00:07:24.279 "params": { 00:07:24.279 "impl_name": "posix", 00:07:24.279 "recv_buf_size": 2097152, 00:07:24.279 "send_buf_size": 2097152, 00:07:24.279 "enable_recv_pipe": true, 00:07:24.279 "enable_quickack": false, 00:07:24.279 "enable_placement_id": 0, 00:07:24.279 "enable_zerocopy_send_server": true, 00:07:24.279 "enable_zerocopy_send_client": false, 00:07:24.279 "zerocopy_threshold": 0, 00:07:24.279 "tls_version": 0, 00:07:24.279 "enable_ktls": false 00:07:24.279 } 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "vmd", 00:07:24.279 "config": [] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "accel", 00:07:24.279 "config": [ 00:07:24.279 { 00:07:24.279 "method": "accel_set_options", 00:07:24.279 "params": { 00:07:24.279 "small_cache_size": 128, 00:07:24.279 "large_cache_size": 16, 00:07:24.279 "task_count": 2048, 00:07:24.279 "sequence_count": 2048, 00:07:24.279 "buf_count": 2048 00:07:24.279 } 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "bdev", 00:07:24.279 "config": [ 00:07:24.279 { 00:07:24.279 "method": "bdev_set_options", 00:07:24.279 "params": { 00:07:24.279 "bdev_io_pool_size": 65535, 00:07:24.279 "bdev_io_cache_size": 256, 00:07:24.279 "bdev_auto_examine": true, 00:07:24.279 "iobuf_small_cache_size": 128, 00:07:24.279 "iobuf_large_cache_size": 16 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "bdev_raid_set_options", 00:07:24.279 "params": { 00:07:24.279 "process_window_size_kb": 1024, 00:07:24.279 "process_max_bandwidth_mb_sec": 0 
00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "bdev_iscsi_set_options", 00:07:24.279 "params": { 00:07:24.279 "timeout_sec": 30 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "bdev_nvme_set_options", 00:07:24.279 "params": { 00:07:24.279 "action_on_timeout": "none", 00:07:24.279 "timeout_us": 0, 00:07:24.279 "timeout_admin_us": 0, 00:07:24.279 "keep_alive_timeout_ms": 10000, 00:07:24.279 "arbitration_burst": 0, 00:07:24.279 "low_priority_weight": 0, 00:07:24.279 "medium_priority_weight": 0, 00:07:24.279 "high_priority_weight": 0, 00:07:24.279 "nvme_adminq_poll_period_us": 10000, 00:07:24.279 "nvme_ioq_poll_period_us": 0, 00:07:24.279 "io_queue_requests": 0, 00:07:24.279 "delay_cmd_submit": true, 00:07:24.279 "transport_retry_count": 4, 00:07:24.279 "bdev_retry_count": 3, 00:07:24.279 "transport_ack_timeout": 0, 00:07:24.279 "ctrlr_loss_timeout_sec": 0, 00:07:24.279 "reconnect_delay_sec": 0, 00:07:24.279 "fast_io_fail_timeout_sec": 0, 00:07:24.279 "disable_auto_failback": false, 00:07:24.279 "generate_uuids": false, 00:07:24.279 "transport_tos": 0, 00:07:24.279 "nvme_error_stat": false, 00:07:24.279 "rdma_srq_size": 0, 00:07:24.279 "io_path_stat": false, 00:07:24.279 "allow_accel_sequence": false, 00:07:24.279 "rdma_max_cq_size": 0, 00:07:24.279 "rdma_cm_event_timeout_ms": 0, 00:07:24.279 "dhchap_digests": [ 00:07:24.279 "sha256", 00:07:24.279 "sha384", 00:07:24.279 "sha512" 00:07:24.279 ], 00:07:24.279 "dhchap_dhgroups": [ 00:07:24.279 "null", 00:07:24.279 "ffdhe2048", 00:07:24.279 "ffdhe3072", 00:07:24.279 "ffdhe4096", 00:07:24.279 "ffdhe6144", 00:07:24.279 "ffdhe8192" 00:07:24.279 ] 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "bdev_nvme_set_hotplug", 00:07:24.279 "params": { 00:07:24.279 "period_us": 100000, 00:07:24.279 "enable": false 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "bdev_wait_for_examine" 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 }, 00:07:24.279 { 
00:07:24.279 "subsystem": "scsi", 00:07:24.279 "config": null 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "scheduler", 00:07:24.279 "config": [ 00:07:24.279 { 00:07:24.279 "method": "framework_set_scheduler", 00:07:24.279 "params": { 00:07:24.279 "name": "static" 00:07:24.279 } 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "vhost_scsi", 00:07:24.279 "config": [] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "vhost_blk", 00:07:24.279 "config": [] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "ublk", 00:07:24.279 "config": [] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "nbd", 00:07:24.279 "config": [] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "nvmf", 00:07:24.279 "config": [ 00:07:24.279 { 00:07:24.279 "method": "nvmf_set_config", 00:07:24.279 "params": { 00:07:24.279 "discovery_filter": "match_any", 00:07:24.279 "admin_cmd_passthru": { 00:07:24.279 "identify_ctrlr": false 00:07:24.279 }, 00:07:24.279 "dhchap_digests": [ 00:07:24.279 "sha256", 00:07:24.279 "sha384", 00:07:24.279 "sha512" 00:07:24.279 ], 00:07:24.279 "dhchap_dhgroups": [ 00:07:24.279 "null", 00:07:24.279 "ffdhe2048", 00:07:24.279 "ffdhe3072", 00:07:24.279 "ffdhe4096", 00:07:24.279 "ffdhe6144", 00:07:24.279 "ffdhe8192" 00:07:24.279 ] 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "nvmf_set_max_subsystems", 00:07:24.279 "params": { 00:07:24.279 "max_subsystems": 1024 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "nvmf_set_crdt", 00:07:24.279 "params": { 00:07:24.279 "crdt1": 0, 00:07:24.279 "crdt2": 0, 00:07:24.279 "crdt3": 0 00:07:24.279 } 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "method": "nvmf_create_transport", 00:07:24.279 "params": { 00:07:24.279 "trtype": "TCP", 00:07:24.279 "max_queue_depth": 128, 00:07:24.279 "max_io_qpairs_per_ctrlr": 127, 00:07:24.279 "in_capsule_data_size": 4096, 00:07:24.279 "max_io_size": 131072, 00:07:24.279 
"io_unit_size": 131072, 00:07:24.279 "max_aq_depth": 128, 00:07:24.279 "num_shared_buffers": 511, 00:07:24.279 "buf_cache_size": 4294967295, 00:07:24.279 "dif_insert_or_strip": false, 00:07:24.279 "zcopy": false, 00:07:24.279 "c2h_success": true, 00:07:24.279 "sock_priority": 0, 00:07:24.279 "abort_timeout_sec": 1, 00:07:24.279 "ack_timeout": 0, 00:07:24.279 "data_wr_pool_size": 0 00:07:24.279 } 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 }, 00:07:24.279 { 00:07:24.279 "subsystem": "iscsi", 00:07:24.279 "config": [ 00:07:24.279 { 00:07:24.279 "method": "iscsi_set_options", 00:07:24.279 "params": { 00:07:24.279 "node_base": "iqn.2016-06.io.spdk", 00:07:24.279 "max_sessions": 128, 00:07:24.279 "max_connections_per_session": 2, 00:07:24.279 "max_queue_depth": 64, 00:07:24.279 "default_time2wait": 2, 00:07:24.279 "default_time2retain": 20, 00:07:24.279 "first_burst_length": 8192, 00:07:24.279 "immediate_data": true, 00:07:24.279 "allow_duplicated_isid": false, 00:07:24.279 "error_recovery_level": 0, 00:07:24.279 "nop_timeout": 60, 00:07:24.279 "nop_in_interval": 30, 00:07:24.279 "disable_chap": false, 00:07:24.279 "require_chap": false, 00:07:24.279 "mutual_chap": false, 00:07:24.279 "chap_group": 0, 00:07:24.279 "max_large_datain_per_connection": 64, 00:07:24.279 "max_r2t_per_connection": 4, 00:07:24.279 "pdu_pool_size": 36864, 00:07:24.279 "immediate_data_pool_size": 16384, 00:07:24.279 "data_out_pool_size": 2048 00:07:24.279 } 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 } 00:07:24.279 ] 00:07:24.279 } 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4162106 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 4162106 ']' 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 4162106 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:07:24.279 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4162106 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4162106' 00:07:24.280 killing process with pid 4162106 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 4162106 00:07:24.280 12:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 4162106 00:07:24.538 12:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4162190 00:07:24.538 12:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:24.538 12:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4162190 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 4162190 ']' 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 4162190 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4162190 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4162190' 00:07:29.807 killing process with pid 4162190 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 4162190 00:07:29.807 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 4162190 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:30.067 00:07:30.067 real 0m6.382s 00:07:30.067 user 0m6.099s 00:07:30.067 sys 0m0.668s 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 ************************************ 00:07:30.067 END TEST skip_rpc_with_json 00:07:30.067 ************************************ 00:07:30.067 12:14:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:30.067 12:14:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.067 12:14:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.067 12:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 ************************************ 00:07:30.067 START TEST skip_rpc_with_delay 00:07:30.067 ************************************ 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:30.067 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.326 [2024-11-06 12:14:01.690436] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
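The `skip_rpc_with_delay` test above deliberately runs `spdk_tgt` with an invalid flag combination (`--no-rpc-server` together with `--wait-for-rpc`) and wraps it in the `NOT` helper, which inverts the exit status so an expected failure counts as a pass. A simplified sketch of that inverting wrapper, assuming the `es` handling visible in the `common/autotest_common.sh` trace (the real helper additionally uses `type -t` / `type -P` to validate the argument before running it):

```shell
#!/usr/bin/env bash
# NOT cmd...: succeed only when the wrapped command fails.
# Mirrors the expected-failure pattern traced from common/autotest_common.sh.
NOT() {
    local es=0
    "$@" || es=$?          # capture the wrapped command's exit status
    (( es > 128 )) && es=1 # normalize signal deaths (es > 128) to plain failure
    # invert: a non-zero status from the wrapped command means the test passed
    (( !es == 0 ))
}

NOT false && echo "expected failure detected"
NOT true  || echo "unexpected success detected"
```

The `(( !es == 0 ))` form matches the trace literally: in bash arithmetic, `!` binds before `==`, so the expression is true exactly when `es` was non-zero, which is why the log shows `es=1` followed by a passing test after spdk_tgt prints its "Cannot use '--wait-for-rpc'" error.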
00:07:30.326 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:30.326 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.326 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.326 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.326 00:07:30.326 real 0m0.082s 00:07:30.327 user 0m0.043s 00:07:30.327 sys 0m0.038s 00:07:30.327 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.327 12:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:30.327 ************************************ 00:07:30.327 END TEST skip_rpc_with_delay 00:07:30.327 ************************************ 00:07:30.327 12:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:30.327 12:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:30.327 12:14:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:30.327 12:14:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.327 12:14:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.327 12:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.327 ************************************ 00:07:30.327 START TEST exit_on_failed_rpc_init 00:07:30.327 ************************************ 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4163294 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4163294 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 4163294 ']' 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.327 12:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:30.327 [2024-11-06 12:14:01.844083] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:30.327 [2024-11-06 12:14:01.844143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163294 ] 00:07:30.327 [2024-11-06 12:14:01.938261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.585 [2024-11-06 12:14:01.985522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.153 
12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:31.153 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.412 [2024-11-06 12:14:02.795486] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:31.412 [2024-11-06 12:14:02.795548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163561 ] 00:07:31.412 [2024-11-06 12:14:02.862743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.412 [2024-11-06 12:14:02.900973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.412 [2024-11-06 12:14:02.901029] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:31.412 [2024-11-06 12:14:02.901038] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:31.412 [2024-11-06 12:14:02.901044] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4163294 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 4163294 ']' 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 4163294 00:07:31.412 12:14:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.412 12:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4163294 00:07:31.412 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.412 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.412 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4163294' 00:07:31.412 killing process with pid 4163294 00:07:31.412 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 4163294 00:07:31.412 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 4163294 00:07:31.979 00:07:31.979 real 0m1.544s 00:07:31.979 user 0m1.765s 00:07:31.979 sys 0m0.464s 00:07:31.979 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.979 12:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:31.979 ************************************ 00:07:31.979 END TEST exit_on_failed_rpc_init 00:07:31.979 ************************************ 00:07:31.979 12:14:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:31.979 00:07:31.979 real 0m13.895s 00:07:31.979 user 0m13.266s 00:07:31.979 sys 0m1.772s 00:07:31.979 12:14:03 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.979 12:14:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.979 ************************************ 00:07:31.979 END TEST skip_rpc 00:07:31.979 ************************************ 00:07:31.979 12:14:03 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:31.979 12:14:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:31.979 12:14:03 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:31.979 12:14:03 -- common/autotest_common.sh@10 -- # set +x
00:07:31.979 ************************************
00:07:31.979 START TEST rpc_client ************************************
00:07:31.979 12:14:03 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:31.980 * Looking for test storage...
00:07:31.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:07:31.980 12:14:03 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:31.980 12:14:03 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:07:31.980 12:14:03 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:31.980 12:14:03 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@345 -- # : 1
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:31.980 12:14:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@353 -- # local d=1
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@355 -- # echo 1
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@353 -- # local d=2
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@355 -- # echo 2
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:32.239 12:14:03 rpc_client -- scripts/common.sh@368 -- # return 0
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:32.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.239 --rc genhtml_branch_coverage=1
00:07:32.239 --rc genhtml_function_coverage=1
00:07:32.239 --rc genhtml_legend=1
00:07:32.239 --rc geninfo_all_blocks=1
00:07:32.239 --rc geninfo_unexecuted_blocks=1
00:07:32.239 
00:07:32.239 '
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:32.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.239 --rc genhtml_branch_coverage=1 
00:07:32.239 --rc genhtml_function_coverage=1
00:07:32.239 --rc genhtml_legend=1
00:07:32.239 --rc geninfo_all_blocks=1
00:07:32.239 --rc geninfo_unexecuted_blocks=1
00:07:32.239 
00:07:32.239 '
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:32.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.239 --rc genhtml_branch_coverage=1
00:07:32.239 --rc genhtml_function_coverage=1
00:07:32.239 --rc genhtml_legend=1
00:07:32.239 --rc geninfo_all_blocks=1
00:07:32.239 --rc geninfo_unexecuted_blocks=1
00:07:32.239 
00:07:32.239 '
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:32.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.239 --rc genhtml_branch_coverage=1
00:07:32.239 --rc genhtml_function_coverage=1
00:07:32.239 --rc genhtml_legend=1
00:07:32.239 --rc geninfo_all_blocks=1
00:07:32.239 --rc geninfo_unexecuted_blocks=1
00:07:32.239 
00:07:32.239 '
00:07:32.239 12:14:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:32.239 OK
00:07:32.239 12:14:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:32.239 
00:07:32.239 real 0m0.195s
00:07:32.239 user 0m0.107s
00:07:32.239 sys 0m0.100s
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:32.239 12:14:03 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:32.239 ************************************
00:07:32.239 END TEST rpc_client
00:07:32.239 ************************************
00:07:32.239 12:14:03 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:07:32.239 12:14:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:32.239 12:14:03 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:32.239 12:14:03 -- common/autotest_common.sh@10 
-- # set +x
00:07:32.239 ************************************
00:07:32.239 START TEST json_config
00:07:32.239 ************************************
00:07:32.239 12:14:03 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:07:32.239 12:14:03 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:32.239 12:14:03 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:32.239 12:14:03 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:07:32.239 12:14:03 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:32.239 12:14:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:32.239 12:14:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:32.239 12:14:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:32.239 12:14:03 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:07:32.239 12:14:03 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:07:32.239 12:14:03 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:07:32.239 12:14:03 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:07:32.239 12:14:03 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:07:32.239 12:14:03 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:07:32.239 12:14:03 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:07:32.239 12:14:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:32.239 12:14:03 json_config -- scripts/common.sh@344 -- # case "$op" in
00:07:32.239 12:14:03 json_config -- scripts/common.sh@345 -- # : 1
00:07:32.239 12:14:03 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:32.239 12:14:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:07:32.498 12:14:03 json_config -- scripts/common.sh@365 -- # decimal 1
00:07:32.498 12:14:03 json_config -- scripts/common.sh@353 -- # local d=1
00:07:32.498 12:14:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:32.498 12:14:03 json_config -- scripts/common.sh@355 -- # echo 1
00:07:32.498 12:14:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:07:32.498 12:14:03 json_config -- scripts/common.sh@366 -- # decimal 2
00:07:32.499 12:14:03 json_config -- scripts/common.sh@353 -- # local d=2
00:07:32.499 12:14:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:32.499 12:14:03 json_config -- scripts/common.sh@355 -- # echo 2
00:07:32.499 12:14:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:07:32.499 12:14:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:32.499 12:14:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:32.499 12:14:03 json_config -- scripts/common.sh@368 -- # return 0
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:32.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.499 --rc genhtml_branch_coverage=1
00:07:32.499 --rc genhtml_function_coverage=1
00:07:32.499 --rc genhtml_legend=1
00:07:32.499 --rc geninfo_all_blocks=1
00:07:32.499 --rc geninfo_unexecuted_blocks=1
00:07:32.499 
00:07:32.499 '
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:32.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.499 --rc genhtml_branch_coverage=1
00:07:32.499 --rc genhtml_function_coverage=1
00:07:32.499 --rc genhtml_legend=1
00:07:32.499 --rc geninfo_all_blocks=1
00:07:32.499 --rc geninfo_unexecuted_blocks=1
00:07:32.499 
00:07:32.499 '
00:07:32.499 12:14:03 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:32.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.499 --rc genhtml_branch_coverage=1
00:07:32.499 --rc genhtml_function_coverage=1
00:07:32.499 --rc genhtml_legend=1
00:07:32.499 --rc geninfo_all_blocks=1
00:07:32.499 --rc geninfo_unexecuted_blocks=1
00:07:32.499 
00:07:32.499 '
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:32.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.499 --rc genhtml_branch_coverage=1
00:07:32.499 --rc genhtml_function_coverage=1
00:07:32.499 --rc genhtml_legend=1
00:07:32.499 --rc geninfo_all_blocks=1
00:07:32.499 --rc geninfo_unexecuted_blocks=1
00:07:32.499 
00:07:32.499 '
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:32.499 12:14:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:07:32.499 12:14:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:32.499 12:14:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:32.499 12:14:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:32.499 12:14:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:32.499 12:14:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:32.499 12:14:03 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:32.499 12:14:03 json_config -- paths/export.sh@5 -- # export PATH
00:07:32.499 12:14:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@51 -- # : 0
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:32.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:32.499 12:14:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:07:32.499 INFO: JSON configuration test init
00:07:32.499 12:14:03 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:32.499 12:14:03 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:07:32.499 12:14:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:32.500 12:14:03 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:07:32.500 12:14:03 json_config -- json_config/common.sh@9 -- # local app=target
00:07:32.500 12:14:03 json_config -- json_config/common.sh@10 -- # shift
00:07:32.500 12:14:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:32.500 12:14:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:32.500 12:14:03 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:32.500 12:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:32.500 12:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:32.500 12:14:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4163945
00:07:32.500 12:14:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:32.500 Waiting for target to run... 
00:07:32.500 12:14:03 json_config -- json_config/common.sh@25 -- # waitforlisten 4163945 /var/tmp/spdk_tgt.sock
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@833 -- # '[' -z 4163945 ']'
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:32.500 12:14:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:32.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:32.500 12:14:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:32.500 [2024-11-06 12:14:03.974506] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:32.500 [2024-11-06 12:14:03.974571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163945 ]
00:07:33.067 [2024-11-06 12:14:04.447655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.067 [2024-11-06 12:14:04.515821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@866 -- # return 0
00:07:33.633 12:14:04 json_config -- json_config/common.sh@26 -- # echo ''
00:07:33.633 
00:07:33.633 12:14:04 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:07:33.633 12:14:04 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:33.633 12:14:04 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:07:33.633 12:14:04 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:33.633 12:14:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:33.633 12:14:05 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:07:33.633 12:14:05 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:07:33.633 12:14:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:07:36.920 12:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@51 -- # local get_types
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@54 -- # sort
00:07:36.920 12:14:08 json_config -- 
json_config/json_config.sh@54 -- # type_diff=
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@62 -- # return 0
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:36.920 12:14:08 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:07:36.920 12:14:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:36.920 12:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:37.179 MallocForNvmf0
00:07:37.179 12:14:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:07:37.179 12:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:07:37.438 MallocForNvmf1
00:07:37.438 12:14:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:07:37.438 12:14:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:07:37.697 [2024-11-06 12:14:09.220559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:37.697 12:14:09 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:37.697 12:14:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:37.955 12:14:09 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:37.955 12:14:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:38.213 12:14:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:38.213 12:14:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:38.471 12:14:10 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:07:38.471 12:14:10 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:07:38.729 [2024-11-06 12:14:10.300081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:38.729 12:14:10 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:07:38.729 12:14:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:38.729 12:14:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:38.988 12:14:10 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:07:38.988 12:14:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:38.988 12:14:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:38.988 12:14:10 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:07:38.988 12:14:10 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:38.988 12:14:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:39.247 MallocBdevForConfigChangeCheck
00:07:39.247 12:14:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:07:39.247 12:14:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:39.247 12:14:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:39.247 12:14:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:07:39.247 12:14:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:39.506 12:14:10 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:07:39.506 INFO: shutting down applications...
00:07:39.506 12:14:10 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:07:39.506 12:14:10 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:07:39.506 12:14:10 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:07:39.506 12:14:10 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:41.408 Calling clear_iscsi_subsystem
00:07:41.408 Calling clear_nvmf_subsystem
00:07:41.408 Calling clear_nbd_subsystem
00:07:41.408 Calling clear_ublk_subsystem
00:07:41.408 Calling clear_vhost_blk_subsystem
00:07:41.408 Calling clear_vhost_scsi_subsystem
00:07:41.408 Calling clear_bdev_subsystem
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@350 -- # count=100
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:41.408 12:14:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:41.667 12:14:13 json_config -- json_config/json_config.sh@352 -- # break
00:07:41.667 12:14:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:07:41.667 12:14:13 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:07:41.667 12:14:13 json_config -- json_config/common.sh@31 -- # local app=target 00:07:41.667 12:14:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:41.667 12:14:13 json_config -- json_config/common.sh@35 -- # [[ -n 4163945 ]] 00:07:41.667 12:14:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4163945 00:07:41.667 12:14:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:41.667 12:14:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.667 12:14:13 json_config -- json_config/common.sh@41 -- # kill -0 4163945 00:07:41.667 12:14:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.236 12:14:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.236 12:14:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.236 12:14:13 json_config -- json_config/common.sh@41 -- # kill -0 4163945 00:07:42.236 12:14:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:42.236 12:14:13 json_config -- json_config/common.sh@43 -- # break 00:07:42.236 12:14:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:42.236 12:14:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:42.236 SPDK target shutdown done 00:07:42.236 12:14:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:42.236 INFO: relaunching applications... 
00:07:42.236 12:14:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:42.236 12:14:13 json_config -- json_config/common.sh@9 -- # local app=target
00:07:42.236 12:14:13 json_config -- json_config/common.sh@10 -- # shift
00:07:42.236 12:14:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:42.236 12:14:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:42.236 12:14:13 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:42.236 12:14:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:42.236 12:14:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:42.236 12:14:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4165792
00:07:42.236 12:14:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:42.236 Waiting for target to run...
00:07:42.236 12:14:13 json_config -- json_config/common.sh@25 -- # waitforlisten 4165792 /var/tmp/spdk_tgt.sock
00:07:42.236 12:14:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@833 -- # '[' -z 4165792 ']'
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:42.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:42.236 12:14:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:42.236 [2024-11-06 12:14:13.693286] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:07:42.236 [2024-11-06 12:14:13.693358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165792 ]
00:07:42.804 [2024-11-06 12:14:14.156114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.804 [2024-11-06 12:14:14.222372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.091 [2024-11-06 12:14:17.293928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:46.091 [2024-11-06 12:14:17.326316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:46.668 12:14:18 json_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:46.668 12:14:18 json_config -- common/autotest_common.sh@866 -- # return 0
00:07:46.668 12:14:18 json_config -- json_config/common.sh@26 -- # echo ''
00:07:46.668
00:07:46.668 12:14:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:07:46.668 12:14:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:46.668 INFO: Checking if target configuration is the same...
00:07:46.668 12:14:18 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:46.668 12:14:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:07:46.668 12:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:46.668 + '[' 2 -ne 2 ']'
00:07:46.668 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:46.668 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:46.668 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:46.668 +++ basename /dev/fd/62
00:07:46.668 ++ mktemp /tmp/62.XXX
00:07:46.668 + tmp_file_1=/tmp/62.xpi
00:07:46.668 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:46.668 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:46.668 + tmp_file_2=/tmp/spdk_tgt_config.json.XpZ
00:07:46.668 + ret=0
00:07:46.668 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:46.927 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:47.186 + diff -u /tmp/62.xpi /tmp/spdk_tgt_config.json.XpZ
00:07:47.186 + echo 'INFO: JSON config files are the same'
00:07:47.186 INFO: JSON config files are the same
00:07:47.186 + rm /tmp/62.xpi /tmp/spdk_tgt_config.json.XpZ
00:07:47.186 + exit 0
00:07:47.186 12:14:18 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:07:47.186 12:14:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:47.186 INFO: changing configuration and checking if this can be detected...
00:07:47.186 12:14:18 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:47.186 12:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:47.445 12:14:18 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:47.445 12:14:18 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:07:47.445 12:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:47.445 + '[' 2 -ne 2 ']'
00:07:47.445 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:47.445 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:47.445 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:47.445 +++ basename /dev/fd/62
00:07:47.445 ++ mktemp /tmp/62.XXX
00:07:47.445 + tmp_file_1=/tmp/62.bKq
00:07:47.445 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:47.445 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:47.445 + tmp_file_2=/tmp/spdk_tgt_config.json.9qN
00:07:47.445 + ret=0
00:07:47.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:47.704 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:47.963 + diff -u /tmp/62.bKq /tmp/spdk_tgt_config.json.9qN
00:07:47.963 + ret=1
00:07:47.963 + echo '=== Start of file: /tmp/62.bKq ==='
00:07:47.963 + cat /tmp/62.bKq
00:07:47.963 + echo '=== End of file: /tmp/62.bKq ==='
00:07:47.963 + echo ''
00:07:47.963 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9qN ==='
00:07:47.963 + cat /tmp/spdk_tgt_config.json.9qN
00:07:47.963 + echo '=== End of file: /tmp/spdk_tgt_config.json.9qN ==='
00:07:47.963 + echo ''
00:07:47.963 + rm /tmp/62.bKq /tmp/spdk_tgt_config.json.9qN
00:07:47.963 + exit 1
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:07:47.963 INFO: configuration change detected.
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 4165792 ]]
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@200 -- # uname -s
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:47.963 12:14:19 json_config -- json_config/json_config.sh@330 -- # killprocess 4165792
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@952 -- # '[' -z 4165792 ']'
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@956 -- # kill -0 4165792
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@957 -- # uname
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4165792
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4165792'
00:07:47.963 killing process with pid 4165792
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@971 -- # kill 4165792
00:07:47.963 12:14:19 json_config -- common/autotest_common.sh@976 -- # wait 4165792
00:07:49.867 12:14:20 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:49.867 12:14:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:07:49.867 12:14:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:49.867 12:14:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:49.867 12:14:21 json_config -- json_config/json_config.sh@335 -- # return 0
00:07:49.867 12:14:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:07:49.867 INFO: Success
00:07:49.867
00:07:49.867 real 0m17.336s
00:07:49.867 user 0m18.837s
00:07:49.867 sys 0m2.960s
00:07:49.867 12:14:21 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:49.867 12:14:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:49.867 ************************************
00:07:49.867 END TEST json_config
00:07:49.867 ************************************
00:07:49.867 12:14:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:49.867 12:14:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:49.867 12:14:21 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:49.867 12:14:21 -- common/autotest_common.sh@10 -- # set +x
00:07:49.867 ************************************
00:07:49.867 START TEST json_config_extra_key
00:07:49.867 ************************************
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.867 --rc genhtml_branch_coverage=1
00:07:49.867 --rc genhtml_function_coverage=1
00:07:49.867 --rc genhtml_legend=1
00:07:49.867 --rc geninfo_all_blocks=1
00:07:49.867 --rc geninfo_unexecuted_blocks=1
00:07:49.867
00:07:49.867 '
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.867 --rc genhtml_branch_coverage=1
00:07:49.867 --rc genhtml_function_coverage=1
00:07:49.867 --rc genhtml_legend=1
00:07:49.867 --rc geninfo_all_blocks=1
00:07:49.867 --rc geninfo_unexecuted_blocks=1
00:07:49.867
00:07:49.867 '
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.867 --rc genhtml_branch_coverage=1
00:07:49.867 --rc genhtml_function_coverage=1
00:07:49.867 --rc genhtml_legend=1
00:07:49.867 --rc geninfo_all_blocks=1
00:07:49.867 --rc geninfo_unexecuted_blocks=1
00:07:49.867
00:07:49.867 '
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.867 --rc genhtml_branch_coverage=1
00:07:49.867 --rc genhtml_function_coverage=1
00:07:49.867 --rc genhtml_legend=1
00:07:49.867 --rc geninfo_all_blocks=1
00:07:49.867 --rc geninfo_unexecuted_blocks=1
00:07:49.867
00:07:49.867 '
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:49.867 12:14:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:49.867 12:14:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.867 12:14:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.867 12:14:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.867 12:14:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:49.867 12:14:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:49.867 12:14:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:49.867 INFO: launching applications...
00:07:49.867 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4167354
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:49.867 Waiting for target to run...
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4167354 /var/tmp/spdk_tgt.sock
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 4167354 ']'
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:49.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:49.867 12:14:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:49.867 12:14:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:49.867 [2024-11-06 12:14:21.336274] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:07:49.867 [2024-11-06 12:14:21.336337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167354 ]
00:07:50.126 [2024-11-06 12:14:21.661004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.126 [2024-11-06 12:14:21.701964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.693 12:14:22 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:50.693 12:14:22 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:50.693
00:07:50.693 12:14:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:50.693 INFO: shutting down applications...
00:07:50.693 12:14:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4167354 ]]
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4167354
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4167354
00:07:50.693 12:14:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4167354
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:51.261 12:14:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:51.261 SPDK target shutdown done
00:07:51.261 12:14:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:51.261 Success
00:07:51.261
00:07:51.261 real 0m1.493s
00:07:51.261 user 0m1.251s
00:07:51.261 sys 0m0.388s
00:07:51.261 12:14:22 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:51.261 12:14:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:51.261 ************************************
00:07:51.261 END TEST json_config_extra_key
00:07:51.261 ************************************
00:07:51.261 12:14:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:51.261 12:14:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:51.261 12:14:22 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:51.261 12:14:22 -- common/autotest_common.sh@10 -- # set +x
00:07:51.261 ************************************
00:07:51.261 START TEST alias_rpc
00:07:51.261 ************************************
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:51.261 * Looking for test storage...
00:07:51.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:51.261 12:14:22 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:51.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.261 --rc genhtml_branch_coverage=1
00:07:51.261 --rc genhtml_function_coverage=1
00:07:51.261 --rc genhtml_legend=1
00:07:51.261 --rc geninfo_all_blocks=1
00:07:51.261 --rc geninfo_unexecuted_blocks=1
00:07:51.261
00:07:51.261 '
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:51.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.261 --rc genhtml_branch_coverage=1
00:07:51.261 --rc genhtml_function_coverage=1
00:07:51.261 --rc genhtml_legend=1
00:07:51.261 --rc geninfo_all_blocks=1
00:07:51.261 --rc geninfo_unexecuted_blocks=1
00:07:51.261
00:07:51.261 '
00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1705 --
# export 'LCOV=lcov 00:07:51.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.261 --rc genhtml_branch_coverage=1 00:07:51.261 --rc genhtml_function_coverage=1 00:07:51.261 --rc genhtml_legend=1 00:07:51.261 --rc geninfo_all_blocks=1 00:07:51.261 --rc geninfo_unexecuted_blocks=1 00:07:51.261 00:07:51.261 ' 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.261 --rc genhtml_branch_coverage=1 00:07:51.261 --rc genhtml_function_coverage=1 00:07:51.261 --rc genhtml_legend=1 00:07:51.261 --rc geninfo_all_blocks=1 00:07:51.261 --rc geninfo_unexecuted_blocks=1 00:07:51.261 00:07:51.261 ' 00:07:51.261 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:51.261 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4167673 00:07:51.261 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.261 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4167673 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 4167673 ']' 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.261 12:14:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.520 [2024-11-06 12:14:22.929324] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:51.520 [2024-11-06 12:14:22.929390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167673 ] 00:07:51.520 [2024-11-06 12:14:23.025043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.520 [2024-11-06 12:14:23.073508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.455 12:14:23 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.455 12:14:23 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:52.455 12:14:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:52.714 12:14:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4167673 00:07:52.714 12:14:24 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 4167673 ']' 00:07:52.714 12:14:24 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 4167673 00:07:52.714 12:14:24 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:52.714 12:14:24 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.714 12:14:24 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4167673 00:07:52.715 12:14:24 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.715 12:14:24 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.715 12:14:24 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4167673' 00:07:52.715 killing process with pid 4167673 00:07:52.715 12:14:24 alias_rpc -- common/autotest_common.sh@971 -- # kill 4167673 00:07:52.715 12:14:24 alias_rpc -- common/autotest_common.sh@976 -- # wait 4167673 00:07:52.973 00:07:52.973 real 0m1.805s 00:07:52.973 user 0m2.070s 00:07:52.973 sys 0m0.483s 00:07:52.973 12:14:24 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.973 12:14:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.973 ************************************ 00:07:52.973 END TEST alias_rpc 00:07:52.973 ************************************ 00:07:52.973 12:14:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:52.973 12:14:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:52.973 12:14:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:52.973 12:14:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.973 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:07:52.973 ************************************ 00:07:52.973 START TEST spdkcli_tcp 00:07:52.973 ************************************ 00:07:52.973 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:53.231 * Looking for test storage... 
00:07:53.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.231 12:14:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.231 --rc genhtml_branch_coverage=1 00:07:53.231 --rc genhtml_function_coverage=1 00:07:53.231 --rc genhtml_legend=1 00:07:53.231 --rc geninfo_all_blocks=1 00:07:53.231 --rc geninfo_unexecuted_blocks=1 00:07:53.231 00:07:53.231 ' 00:07:53.231 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.232 --rc genhtml_branch_coverage=1 00:07:53.232 --rc genhtml_function_coverage=1 00:07:53.232 --rc genhtml_legend=1 00:07:53.232 --rc geninfo_all_blocks=1 00:07:53.232 --rc geninfo_unexecuted_blocks=1 00:07:53.232 00:07:53.232 ' 00:07:53.232 12:14:24 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.232 --rc genhtml_branch_coverage=1 00:07:53.232 --rc genhtml_function_coverage=1 00:07:53.232 --rc genhtml_legend=1 00:07:53.232 --rc geninfo_all_blocks=1 00:07:53.232 --rc geninfo_unexecuted_blocks=1 00:07:53.232 00:07:53.232 ' 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.232 --rc genhtml_branch_coverage=1 00:07:53.232 --rc genhtml_function_coverage=1 00:07:53.232 --rc genhtml_legend=1 00:07:53.232 --rc geninfo_all_blocks=1 00:07:53.232 --rc geninfo_unexecuted_blocks=1 00:07:53.232 00:07:53.232 ' 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4168010 00:07:53.232 12:14:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:53.232 12:14:24 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 4168010 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 4168010 ']' 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.232 12:14:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.232 [2024-11-06 12:14:24.802856] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:53.232 [2024-11-06 12:14:24.802921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168010 ] 00:07:53.491 [2024-11-06 12:14:24.893308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.491 [2024-11-06 12:14:24.943220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.491 [2024-11-06 12:14:24.943227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.750 12:14:25 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.750 12:14:25 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:53.750 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4168263 00:07:53.750 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:53.750 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:54.010 [ 00:07:54.010 "bdev_malloc_delete", 00:07:54.010 "bdev_malloc_create", 00:07:54.010 "bdev_null_resize", 00:07:54.010 "bdev_null_delete", 00:07:54.010 "bdev_null_create", 00:07:54.010 "bdev_nvme_cuse_unregister", 00:07:54.010 "bdev_nvme_cuse_register", 00:07:54.010 "bdev_opal_new_user", 00:07:54.010 "bdev_opal_set_lock_state", 00:07:54.010 "bdev_opal_delete", 00:07:54.010 "bdev_opal_get_info", 00:07:54.010 "bdev_opal_create", 00:07:54.010 "bdev_nvme_opal_revert", 00:07:54.010 "bdev_nvme_opal_init", 00:07:54.010 "bdev_nvme_send_cmd", 00:07:54.010 "bdev_nvme_set_keys", 00:07:54.010 "bdev_nvme_get_path_iostat", 00:07:54.010 "bdev_nvme_get_mdns_discovery_info", 00:07:54.010 "bdev_nvme_stop_mdns_discovery", 00:07:54.010 "bdev_nvme_start_mdns_discovery", 00:07:54.010 "bdev_nvme_set_multipath_policy", 00:07:54.010 "bdev_nvme_set_preferred_path", 00:07:54.010 "bdev_nvme_get_io_paths", 00:07:54.010 "bdev_nvme_remove_error_injection", 00:07:54.010 "bdev_nvme_add_error_injection", 00:07:54.010 "bdev_nvme_get_discovery_info", 00:07:54.010 "bdev_nvme_stop_discovery", 00:07:54.010 "bdev_nvme_start_discovery", 00:07:54.010 "bdev_nvme_get_controller_health_info", 00:07:54.010 "bdev_nvme_disable_controller", 00:07:54.010 "bdev_nvme_enable_controller", 00:07:54.010 "bdev_nvme_reset_controller", 00:07:54.010 "bdev_nvme_get_transport_statistics", 00:07:54.010 "bdev_nvme_apply_firmware", 00:07:54.010 "bdev_nvme_detach_controller", 00:07:54.010 "bdev_nvme_get_controllers", 00:07:54.010 "bdev_nvme_attach_controller", 00:07:54.010 "bdev_nvme_set_hotplug", 00:07:54.010 "bdev_nvme_set_options", 00:07:54.010 "bdev_passthru_delete", 00:07:54.010 "bdev_passthru_create", 00:07:54.010 "bdev_lvol_set_parent_bdev", 00:07:54.010 "bdev_lvol_set_parent", 00:07:54.010 "bdev_lvol_check_shallow_copy", 00:07:54.010 "bdev_lvol_start_shallow_copy", 00:07:54.010 "bdev_lvol_grow_lvstore", 00:07:54.010 "bdev_lvol_get_lvols", 00:07:54.010 
"bdev_lvol_get_lvstores", 00:07:54.010 "bdev_lvol_delete", 00:07:54.010 "bdev_lvol_set_read_only", 00:07:54.010 "bdev_lvol_resize", 00:07:54.010 "bdev_lvol_decouple_parent", 00:07:54.010 "bdev_lvol_inflate", 00:07:54.010 "bdev_lvol_rename", 00:07:54.010 "bdev_lvol_clone_bdev", 00:07:54.010 "bdev_lvol_clone", 00:07:54.010 "bdev_lvol_snapshot", 00:07:54.010 "bdev_lvol_create", 00:07:54.010 "bdev_lvol_delete_lvstore", 00:07:54.010 "bdev_lvol_rename_lvstore", 00:07:54.010 "bdev_lvol_create_lvstore", 00:07:54.010 "bdev_raid_set_options", 00:07:54.010 "bdev_raid_remove_base_bdev", 00:07:54.010 "bdev_raid_add_base_bdev", 00:07:54.010 "bdev_raid_delete", 00:07:54.010 "bdev_raid_create", 00:07:54.010 "bdev_raid_get_bdevs", 00:07:54.010 "bdev_error_inject_error", 00:07:54.010 "bdev_error_delete", 00:07:54.010 "bdev_error_create", 00:07:54.010 "bdev_split_delete", 00:07:54.010 "bdev_split_create", 00:07:54.010 "bdev_delay_delete", 00:07:54.010 "bdev_delay_create", 00:07:54.010 "bdev_delay_update_latency", 00:07:54.010 "bdev_zone_block_delete", 00:07:54.010 "bdev_zone_block_create", 00:07:54.010 "blobfs_create", 00:07:54.010 "blobfs_detect", 00:07:54.010 "blobfs_set_cache_size", 00:07:54.010 "bdev_aio_delete", 00:07:54.010 "bdev_aio_rescan", 00:07:54.010 "bdev_aio_create", 00:07:54.010 "bdev_ftl_set_property", 00:07:54.010 "bdev_ftl_get_properties", 00:07:54.010 "bdev_ftl_get_stats", 00:07:54.010 "bdev_ftl_unmap", 00:07:54.010 "bdev_ftl_unload", 00:07:54.010 "bdev_ftl_delete", 00:07:54.010 "bdev_ftl_load", 00:07:54.010 "bdev_ftl_create", 00:07:54.010 "bdev_virtio_attach_controller", 00:07:54.010 "bdev_virtio_scsi_get_devices", 00:07:54.010 "bdev_virtio_detach_controller", 00:07:54.010 "bdev_virtio_blk_set_hotplug", 00:07:54.010 "bdev_iscsi_delete", 00:07:54.010 "bdev_iscsi_create", 00:07:54.010 "bdev_iscsi_set_options", 00:07:54.010 "accel_error_inject_error", 00:07:54.010 "ioat_scan_accel_module", 00:07:54.010 "dsa_scan_accel_module", 00:07:54.010 "iaa_scan_accel_module", 
00:07:54.010 "vfu_virtio_create_fs_endpoint", 00:07:54.010 "vfu_virtio_create_scsi_endpoint", 00:07:54.010 "vfu_virtio_scsi_remove_target", 00:07:54.010 "vfu_virtio_scsi_add_target", 00:07:54.010 "vfu_virtio_create_blk_endpoint", 00:07:54.010 "vfu_virtio_delete_endpoint", 00:07:54.010 "keyring_file_remove_key", 00:07:54.010 "keyring_file_add_key", 00:07:54.010 "keyring_linux_set_options", 00:07:54.010 "fsdev_aio_delete", 00:07:54.010 "fsdev_aio_create", 00:07:54.010 "iscsi_get_histogram", 00:07:54.010 "iscsi_enable_histogram", 00:07:54.010 "iscsi_set_options", 00:07:54.010 "iscsi_get_auth_groups", 00:07:54.010 "iscsi_auth_group_remove_secret", 00:07:54.010 "iscsi_auth_group_add_secret", 00:07:54.010 "iscsi_delete_auth_group", 00:07:54.010 "iscsi_create_auth_group", 00:07:54.010 "iscsi_set_discovery_auth", 00:07:54.010 "iscsi_get_options", 00:07:54.011 "iscsi_target_node_request_logout", 00:07:54.011 "iscsi_target_node_set_redirect", 00:07:54.011 "iscsi_target_node_set_auth", 00:07:54.011 "iscsi_target_node_add_lun", 00:07:54.011 "iscsi_get_stats", 00:07:54.011 "iscsi_get_connections", 00:07:54.011 "iscsi_portal_group_set_auth", 00:07:54.011 "iscsi_start_portal_group", 00:07:54.011 "iscsi_delete_portal_group", 00:07:54.011 "iscsi_create_portal_group", 00:07:54.011 "iscsi_get_portal_groups", 00:07:54.011 "iscsi_delete_target_node", 00:07:54.011 "iscsi_target_node_remove_pg_ig_maps", 00:07:54.011 "iscsi_target_node_add_pg_ig_maps", 00:07:54.011 "iscsi_create_target_node", 00:07:54.011 "iscsi_get_target_nodes", 00:07:54.011 "iscsi_delete_initiator_group", 00:07:54.011 "iscsi_initiator_group_remove_initiators", 00:07:54.011 "iscsi_initiator_group_add_initiators", 00:07:54.011 "iscsi_create_initiator_group", 00:07:54.011 "iscsi_get_initiator_groups", 00:07:54.011 "nvmf_set_crdt", 00:07:54.011 "nvmf_set_config", 00:07:54.011 "nvmf_set_max_subsystems", 00:07:54.011 "nvmf_stop_mdns_prr", 00:07:54.011 "nvmf_publish_mdns_prr", 00:07:54.011 "nvmf_subsystem_get_listeners", 
00:07:54.011 "nvmf_subsystem_get_qpairs", 00:07:54.011 "nvmf_subsystem_get_controllers", 00:07:54.011 "nvmf_get_stats", 00:07:54.011 "nvmf_get_transports", 00:07:54.011 "nvmf_create_transport", 00:07:54.011 "nvmf_get_targets", 00:07:54.011 "nvmf_delete_target", 00:07:54.011 "nvmf_create_target", 00:07:54.011 "nvmf_subsystem_allow_any_host", 00:07:54.011 "nvmf_subsystem_set_keys", 00:07:54.011 "nvmf_subsystem_remove_host", 00:07:54.011 "nvmf_subsystem_add_host", 00:07:54.011 "nvmf_ns_remove_host", 00:07:54.011 "nvmf_ns_add_host", 00:07:54.011 "nvmf_subsystem_remove_ns", 00:07:54.011 "nvmf_subsystem_set_ns_ana_group", 00:07:54.011 "nvmf_subsystem_add_ns", 00:07:54.011 "nvmf_subsystem_listener_set_ana_state", 00:07:54.011 "nvmf_discovery_get_referrals", 00:07:54.011 "nvmf_discovery_remove_referral", 00:07:54.011 "nvmf_discovery_add_referral", 00:07:54.011 "nvmf_subsystem_remove_listener", 00:07:54.011 "nvmf_subsystem_add_listener", 00:07:54.011 "nvmf_delete_subsystem", 00:07:54.011 "nvmf_create_subsystem", 00:07:54.011 "nvmf_get_subsystems", 00:07:54.011 "env_dpdk_get_mem_stats", 00:07:54.011 "nbd_get_disks", 00:07:54.011 "nbd_stop_disk", 00:07:54.011 "nbd_start_disk", 00:07:54.011 "ublk_recover_disk", 00:07:54.011 "ublk_get_disks", 00:07:54.011 "ublk_stop_disk", 00:07:54.011 "ublk_start_disk", 00:07:54.011 "ublk_destroy_target", 00:07:54.011 "ublk_create_target", 00:07:54.011 "virtio_blk_create_transport", 00:07:54.011 "virtio_blk_get_transports", 00:07:54.011 "vhost_controller_set_coalescing", 00:07:54.011 "vhost_get_controllers", 00:07:54.011 "vhost_delete_controller", 00:07:54.011 "vhost_create_blk_controller", 00:07:54.011 "vhost_scsi_controller_remove_target", 00:07:54.011 "vhost_scsi_controller_add_target", 00:07:54.011 "vhost_start_scsi_controller", 00:07:54.011 "vhost_create_scsi_controller", 00:07:54.011 "thread_set_cpumask", 00:07:54.011 "scheduler_set_options", 00:07:54.011 "framework_get_governor", 00:07:54.011 "framework_get_scheduler", 00:07:54.011 
"framework_set_scheduler", 00:07:54.011 "framework_get_reactors", 00:07:54.011 "thread_get_io_channels", 00:07:54.011 "thread_get_pollers", 00:07:54.011 "thread_get_stats", 00:07:54.011 "framework_monitor_context_switch", 00:07:54.011 "spdk_kill_instance", 00:07:54.011 "log_enable_timestamps", 00:07:54.011 "log_get_flags", 00:07:54.011 "log_clear_flag", 00:07:54.011 "log_set_flag", 00:07:54.011 "log_get_level", 00:07:54.011 "log_set_level", 00:07:54.011 "log_get_print_level", 00:07:54.011 "log_set_print_level", 00:07:54.011 "framework_enable_cpumask_locks", 00:07:54.011 "framework_disable_cpumask_locks", 00:07:54.011 "framework_wait_init", 00:07:54.011 "framework_start_init", 00:07:54.011 "scsi_get_devices", 00:07:54.011 "bdev_get_histogram", 00:07:54.011 "bdev_enable_histogram", 00:07:54.011 "bdev_set_qos_limit", 00:07:54.011 "bdev_set_qd_sampling_period", 00:07:54.011 "bdev_get_bdevs", 00:07:54.011 "bdev_reset_iostat", 00:07:54.011 "bdev_get_iostat", 00:07:54.011 "bdev_examine", 00:07:54.011 "bdev_wait_for_examine", 00:07:54.011 "bdev_set_options", 00:07:54.011 "accel_get_stats", 00:07:54.011 "accel_set_options", 00:07:54.011 "accel_set_driver", 00:07:54.011 "accel_crypto_key_destroy", 00:07:54.011 "accel_crypto_keys_get", 00:07:54.011 "accel_crypto_key_create", 00:07:54.011 "accel_assign_opc", 00:07:54.011 "accel_get_module_info", 00:07:54.011 "accel_get_opc_assignments", 00:07:54.011 "vmd_rescan", 00:07:54.011 "vmd_remove_device", 00:07:54.011 "vmd_enable", 00:07:54.011 "sock_get_default_impl", 00:07:54.011 "sock_set_default_impl", 00:07:54.011 "sock_impl_set_options", 00:07:54.011 "sock_impl_get_options", 00:07:54.011 "iobuf_get_stats", 00:07:54.011 "iobuf_set_options", 00:07:54.011 "keyring_get_keys", 00:07:54.011 "vfu_tgt_set_base_path", 00:07:54.011 "framework_get_pci_devices", 00:07:54.011 "framework_get_config", 00:07:54.011 "framework_get_subsystems", 00:07:54.011 "fsdev_set_opts", 00:07:54.011 "fsdev_get_opts", 00:07:54.011 "trace_get_info", 
00:07:54.011 "trace_get_tpoint_group_mask", 00:07:54.011 "trace_disable_tpoint_group", 00:07:54.011 "trace_enable_tpoint_group", 00:07:54.011 "trace_clear_tpoint_mask", 00:07:54.011 "trace_set_tpoint_mask", 00:07:54.011 "notify_get_notifications", 00:07:54.011 "notify_get_types", 00:07:54.011 "spdk_get_version", 00:07:54.011 "rpc_get_methods" 00:07:54.011 ] 00:07:54.011 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.011 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:54.011 12:14:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4168010 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 4168010 ']' 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 4168010 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4168010 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4168010' 00:07:54.011 killing process with pid 4168010 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 4168010 00:07:54.011 12:14:25 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 4168010 00:07:54.270 00:07:54.270 real 0m1.312s 00:07:54.270 user 0m2.325s 00:07:54.270 sys 0m0.464s 00:07:54.270 12:14:25 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.270 12:14:25 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.270 ************************************ 00:07:54.270 END TEST spdkcli_tcp 00:07:54.270 ************************************ 00:07:54.530 12:14:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:54.530 12:14:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:54.530 12:14:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.530 12:14:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.530 ************************************ 00:07:54.530 START TEST dpdk_mem_utility 00:07:54.530 ************************************ 00:07:54.530 12:14:25 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:54.530 * Looking for test storage... 00:07:54.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:54.530 12:14:25 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.530 12:14:25 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.530 12:14:25 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.530 12:14:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 
00:07:54.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.530 --rc genhtml_branch_coverage=1 00:07:54.530 --rc genhtml_function_coverage=1 00:07:54.530 --rc genhtml_legend=1 00:07:54.530 --rc geninfo_all_blocks=1 00:07:54.530 --rc geninfo_unexecuted_blocks=1 00:07:54.530 00:07:54.530 ' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:54.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.530 --rc genhtml_branch_coverage=1 00:07:54.530 --rc genhtml_function_coverage=1 00:07:54.530 --rc genhtml_legend=1 00:07:54.530 --rc geninfo_all_blocks=1 00:07:54.530 --rc geninfo_unexecuted_blocks=1 00:07:54.530 00:07:54.530 ' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:54.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.530 --rc genhtml_branch_coverage=1 00:07:54.530 --rc genhtml_function_coverage=1 00:07:54.530 --rc genhtml_legend=1 00:07:54.530 --rc geninfo_all_blocks=1 00:07:54.530 --rc geninfo_unexecuted_blocks=1 00:07:54.530 00:07:54.530 ' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:54.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.530 --rc genhtml_branch_coverage=1 00:07:54.530 --rc genhtml_function_coverage=1 00:07:54.530 --rc genhtml_legend=1 00:07:54.530 --rc geninfo_all_blocks=1 00:07:54.530 --rc geninfo_unexecuted_blocks=1 00:07:54.530 00:07:54.530 ' 00:07:54.530 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:54.530 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4168348 00:07:54.530 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4168348 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 
4168348 ']' 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.530 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:54.530 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:54.789 [2024-11-06 12:14:26.149856] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:54.789 [2024-11-06 12:14:26.149918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168348 ] 00:07:54.789 [2024-11-06 12:14:26.243080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.789 [2024-11-06 12:14:26.293643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.048 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.048 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:55.048 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:55.048 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:55.048 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:55.048 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.048 { 00:07:55.048 "filename": "/tmp/spdk_mem_dump.txt" 00:07:55.048 } 00:07:55.048 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.048 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:55.048 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:55.048 1 heaps totaling size 810.000000 MiB 00:07:55.048 size: 810.000000 MiB heap id: 0 00:07:55.048 end heaps---------- 00:07:55.048 9 mempools totaling size 595.772034 MiB 00:07:55.048 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:55.048 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:55.048 size: 92.545471 MiB name: bdev_io_4168348 00:07:55.048 size: 50.003479 MiB name: msgpool_4168348 00:07:55.048 size: 36.509338 MiB name: fsdev_io_4168348 00:07:55.048 size: 21.763794 MiB name: PDU_Pool 00:07:55.048 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:55.048 size: 4.133484 MiB name: evtpool_4168348 00:07:55.048 size: 0.026123 MiB name: Session_Pool 00:07:55.048 end mempools------- 00:07:55.048 6 memzones totaling size 4.142822 MiB 00:07:55.048 size: 1.000366 MiB name: RG_ring_0_4168348 00:07:55.048 size: 1.000366 MiB name: RG_ring_1_4168348 00:07:55.049 size: 1.000366 MiB name: RG_ring_4_4168348 00:07:55.049 size: 1.000366 MiB name: RG_ring_5_4168348 00:07:55.049 size: 0.125366 MiB name: RG_ring_2_4168348 00:07:55.049 size: 0.015991 MiB name: RG_ring_3_4168348 00:07:55.049 end memzones------- 00:07:55.049 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:55.049 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:55.049 list of free elements. 
size: 10.862488 MiB 00:07:55.049 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:55.049 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:55.049 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:55.049 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:55.049 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:55.049 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:55.049 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:55.049 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:55.049 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:55.049 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:55.049 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:55.049 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:55.049 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:55.049 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:55.049 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:55.049 list of standard malloc elements. 
size: 199.218628 MiB 00:07:55.049 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:55.049 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:55.049 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:55.049 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:55.049 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:55.049 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:55.049 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:55.049 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:55.049 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:55.049 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:55.049 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:55.049 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:55.049 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:55.049 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:55.049 list of memzone associated elements. 
size: 599.918884 MiB 00:07:55.049 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:55.049 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:55.049 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:55.049 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:55.049 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:55.049 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_4168348_0 00:07:55.049 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:55.049 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4168348_0 00:07:55.049 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:55.049 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4168348_0 00:07:55.049 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:55.049 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:55.049 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:55.049 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:55.049 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:55.049 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4168348_0 00:07:55.049 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:55.049 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4168348 00:07:55.049 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:55.049 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4168348 00:07:55.049 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:55.049 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:55.049 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:55.049 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:55.049 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:55.049 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:55.049 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:55.049 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:55.049 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:55.049 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4168348 00:07:55.049 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:55.049 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4168348 00:07:55.049 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:55.049 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4168348 00:07:55.049 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:55.049 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4168348 00:07:55.049 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:55.049 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4168348 00:07:55.049 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:55.049 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4168348 00:07:55.049 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:55.049 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:55.049 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:55.049 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:55.049 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:55.049 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:55.049 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:55.049 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4168348 00:07:55.049 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:55.049 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4168348 00:07:55.049 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:55.049 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:55.049 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:55.049 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:55.049 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:55.049 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4168348 00:07:55.049 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:55.049 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:55.049 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:55.049 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4168348 00:07:55.049 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:55.049 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4168348 00:07:55.049 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:55.049 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4168348 00:07:55.049 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:55.049 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:55.049 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:55.309 12:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4168348 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 4168348 ']' 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 4168348 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4168348 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.309 12:14:26 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4168348' 00:07:55.309 killing process with pid 4168348 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 4168348 00:07:55.309 12:14:26 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 4168348 00:07:55.568 00:07:55.568 real 0m1.126s 00:07:55.568 user 0m1.137s 00:07:55.568 sys 0m0.441s 00:07:55.568 12:14:27 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.568 12:14:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.568 ************************************ 00:07:55.568 END TEST dpdk_mem_utility 00:07:55.568 ************************************ 00:07:55.568 12:14:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:55.568 12:14:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:55.568 12:14:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.568 12:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.568 ************************************ 00:07:55.568 START TEST event 00:07:55.568 ************************************ 00:07:55.568 12:14:27 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:55.827 * Looking for test storage... 
00:07:55.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.827 12:14:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.827 12:14:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.827 12:14:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.827 12:14:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.827 12:14:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.827 12:14:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.827 12:14:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.827 12:14:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.827 12:14:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.827 12:14:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.827 12:14:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.827 12:14:27 event -- scripts/common.sh@344 -- # case "$op" in 00:07:55.827 12:14:27 event -- scripts/common.sh@345 -- # : 1 00:07:55.827 12:14:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.827 12:14:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.827 12:14:27 event -- scripts/common.sh@365 -- # decimal 1 00:07:55.827 12:14:27 event -- scripts/common.sh@353 -- # local d=1 00:07:55.827 12:14:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.827 12:14:27 event -- scripts/common.sh@355 -- # echo 1 00:07:55.827 12:14:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.827 12:14:27 event -- scripts/common.sh@366 -- # decimal 2 00:07:55.827 12:14:27 event -- scripts/common.sh@353 -- # local d=2 00:07:55.827 12:14:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.827 12:14:27 event -- scripts/common.sh@355 -- # echo 2 00:07:55.827 12:14:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.827 12:14:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.827 12:14:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.827 12:14:27 event -- scripts/common.sh@368 -- # return 0 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.827 --rc genhtml_branch_coverage=1 00:07:55.827 --rc genhtml_function_coverage=1 00:07:55.827 --rc genhtml_legend=1 00:07:55.827 --rc geninfo_all_blocks=1 00:07:55.827 --rc geninfo_unexecuted_blocks=1 00:07:55.827 00:07:55.827 ' 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.827 --rc genhtml_branch_coverage=1 00:07:55.827 --rc genhtml_function_coverage=1 00:07:55.827 --rc genhtml_legend=1 00:07:55.827 --rc geninfo_all_blocks=1 00:07:55.827 --rc geninfo_unexecuted_blocks=1 00:07:55.827 00:07:55.827 ' 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.827 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:55.827 --rc genhtml_branch_coverage=1 00:07:55.827 --rc genhtml_function_coverage=1 00:07:55.827 --rc genhtml_legend=1 00:07:55.827 --rc geninfo_all_blocks=1 00:07:55.827 --rc geninfo_unexecuted_blocks=1 00:07:55.827 00:07:55.827 ' 00:07:55.827 12:14:27 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.827 --rc genhtml_branch_coverage=1 00:07:55.827 --rc genhtml_function_coverage=1 00:07:55.827 --rc genhtml_legend=1 00:07:55.827 --rc geninfo_all_blocks=1 00:07:55.827 --rc geninfo_unexecuted_blocks=1 00:07:55.827 00:07:55.827 ' 00:07:55.827 12:14:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:55.827 12:14:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:55.828 12:14:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:55.828 12:14:27 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:55.828 12:14:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.828 12:14:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.828 ************************************ 00:07:55.828 START TEST event_perf 00:07:55.828 ************************************ 00:07:55.828 12:14:27 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:55.828 Running I/O for 1 seconds...[2024-11-06 12:14:27.332291] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:55.828 [2024-11-06 12:14:27.332361] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168671 ] 00:07:55.828 [2024-11-06 12:14:27.426841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.086 [2024-11-06 12:14:27.480899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.086 [2024-11-06 12:14:27.480994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.086 [2024-11-06 12:14:27.481226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.086 [2024-11-06 12:14:27.481230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.022 Running I/O for 1 seconds... 00:07:57.022 lcore 0: 190755 00:07:57.022 lcore 1: 190748 00:07:57.022 lcore 2: 190751 00:07:57.022 lcore 3: 190752 00:07:57.022 done. 
00:07:57.022 00:07:57.022 real 0m1.215s 00:07:57.022 user 0m4.124s 00:07:57.022 sys 0m0.086s 00:07:57.022 12:14:28 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.022 12:14:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.022 ************************************ 00:07:57.022 END TEST event_perf 00:07:57.022 ************************************ 00:07:57.022 12:14:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:57.022 12:14:28 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:57.022 12:14:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.022 12:14:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.022 ************************************ 00:07:57.022 START TEST event_reactor 00:07:57.022 ************************************ 00:07:57.022 12:14:28 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:57.022 [2024-11-06 12:14:28.605873] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:07:57.022 [2024-11-06 12:14:28.605940] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168955 ] 00:07:57.281 [2024-11-06 12:14:28.701332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.281 [2024-11-06 12:14:28.749312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.217 test_start 00:07:58.217 oneshot 00:07:58.217 tick 100 00:07:58.217 tick 100 00:07:58.217 tick 250 00:07:58.217 tick 100 00:07:58.217 tick 100 00:07:58.217 tick 250 00:07:58.217 tick 100 00:07:58.217 tick 500 00:07:58.217 tick 100 00:07:58.217 tick 100 00:07:58.217 tick 250 00:07:58.217 tick 100 00:07:58.217 tick 100 00:07:58.217 test_end 00:07:58.217 00:07:58.217 real 0m1.208s 00:07:58.217 user 0m1.118s 00:07:58.217 sys 0m0.085s 00:07:58.217 12:14:29 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.217 12:14:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:58.217 ************************************ 00:07:58.217 END TEST event_reactor 00:07:58.217 ************************************ 00:07:58.217 12:14:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:58.217 12:14:29 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:58.217 12:14:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.217 12:14:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 ************************************ 00:07:58.475 START TEST event_reactor_perf 00:07:58.475 ************************************ 00:07:58.475 12:14:29 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:58.475 [2024-11-06 12:14:29.867825] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:58.475 [2024-11-06 12:14:29.867875] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169235 ] 00:07:58.475 [2024-11-06 12:14:29.961195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.475 [2024-11-06 12:14:30.009612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.853 test_start 00:07:59.853 test_end 00:07:59.853 Performance: 313988 events per second 00:07:59.853 00:07:59.853 real 0m1.206s 00:07:59.853 user 0m1.118s 00:07:59.853 sys 0m0.082s 00:07:59.853 12:14:31 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.853 12:14:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.853 ************************************ 00:07:59.853 END TEST event_reactor_perf 00:07:59.853 ************************************ 00:07:59.853 12:14:31 event -- event/event.sh@49 -- # uname -s 00:07:59.853 12:14:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:59.853 12:14:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:59.853 12:14:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:59.853 12:14:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.853 12:14:31 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.853 ************************************ 00:07:59.853 START TEST event_scheduler 00:07:59.853 ************************************ 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:59.853 * Looking for test storage... 00:07:59.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.853 12:14:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.853 --rc genhtml_branch_coverage=1 00:07:59.853 --rc genhtml_function_coverage=1 00:07:59.853 --rc genhtml_legend=1 00:07:59.853 --rc geninfo_all_blocks=1 00:07:59.853 --rc geninfo_unexecuted_blocks=1 00:07:59.853 00:07:59.853 ' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.853 --rc genhtml_branch_coverage=1 00:07:59.853 --rc genhtml_function_coverage=1 00:07:59.853 --rc 
genhtml_legend=1 00:07:59.853 --rc geninfo_all_blocks=1 00:07:59.853 --rc geninfo_unexecuted_blocks=1 00:07:59.853 00:07:59.853 ' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.853 --rc genhtml_branch_coverage=1 00:07:59.853 --rc genhtml_function_coverage=1 00:07:59.853 --rc genhtml_legend=1 00:07:59.853 --rc geninfo_all_blocks=1 00:07:59.853 --rc geninfo_unexecuted_blocks=1 00:07:59.853 00:07:59.853 ' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.853 --rc genhtml_branch_coverage=1 00:07:59.853 --rc genhtml_function_coverage=1 00:07:59.853 --rc genhtml_legend=1 00:07:59.853 --rc geninfo_all_blocks=1 00:07:59.853 --rc geninfo_unexecuted_blocks=1 00:07:59.853 00:07:59.853 ' 00:07:59.853 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:59.853 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4169557 00:07:59.853 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.853 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:59.853 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4169557 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 4169557 ']' 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.853 12:14:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:59.853 [2024-11-06 12:14:31.362856] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:07:59.853 [2024-11-06 12:14:31.362919] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169557 ] 00:07:59.853 [2024-11-06 12:14:31.427864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.853 [2024-11-06 12:14:31.468034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.853 [2024-11-06 12:14:31.468133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.853 [2024-11-06 12:14:31.468224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.853 [2024-11-06 12:14:31.468225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:08:00.113 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.113 [2024-11-06 12:14:31.592979] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:00.113 [2024-11-06 12:14:31.592996] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:00.113 [2024-11-06 12:14:31.593005] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:00.113 [2024-11-06 12:14:31.593011] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:00.113 [2024-11-06 12:14:31.593016] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.113 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.113 [2024-11-06 12:14:31.665769] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.113 12:14:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.113 ************************************ 00:08:00.113 START TEST scheduler_create_thread 00:08:00.113 ************************************ 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.113 2 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.113 3 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.113 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 4 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 5 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 6 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 7 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 8 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 9 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 10 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.372 12:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 12:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.939 00:08:00.939 real 0m0.594s 00:08:00.939 user 0m0.023s 00:08:00.939 sys 0m0.006s 00:08:00.939 12:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.939 12:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 ************************************ 00:08:00.939 END TEST scheduler_create_thread 00:08:00.939 ************************************ 00:08:00.939 12:14:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:00.939 12:14:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4169557 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 4169557 ']' 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 4169557 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4169557 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4169557' 00:08:00.939 killing process with pid 4169557 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 4169557 00:08:00.939 12:14:32 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 4169557 00:08:01.198 [2024-11-06 12:14:32.777507] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
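The killprocess sequence just above probes the scheduler app with `kill -0` (signal 0: existence check only, nothing delivered) before issuing the real kill and wait. A minimal sketch of that liveness check — the helper name here is illustrative, not the actual autotest function:

```shell
#!/usr/bin/env bash
# Probe whether a pid still exists without delivering a signal;
# this is the same check killprocess's "kill -0 $pid" performs above.
pid_alive() {
    kill -0 "$1" 2>/dev/null
}

sleep 5 &
bg=$!
pid_alive "$bg" && echo "pid $bg is alive"
kill "$bg"
wait "$bg" 2>/dev/null || true   # reap it so the pid is gone
pid_alive "$bg" || echo "pid $bg is gone"
```

The autotest additionally checks the process name via `ps --no-headers -o comm=` before killing, to avoid ever signalling a `sudo` wrapper by mistake.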
00:08:01.457 00:08:01.457 real 0m1.803s 00:08:01.457 user 0m2.489s 00:08:01.457 sys 0m0.391s 00:08:01.457 12:14:32 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.457 12:14:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 ************************************ 00:08:01.457 END TEST event_scheduler 00:08:01.457 ************************************ 00:08:01.457 12:14:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:01.457 12:14:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:01.457 12:14:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.457 12:14:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.458 12:14:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.458 ************************************ 00:08:01.458 START TEST app_repeat 00:08:01.458 ************************************ 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4169872 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4169872' 00:08:01.458 
Process app_repeat pid: 4169872 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:01.458 spdk_app_start Round 0 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4169872 /var/tmp/spdk-nbd.sock 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 4169872 ']' 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:01.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.458 12:14:32 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:01.458 12:14:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:01.458 [2024-11-06 12:14:33.028533] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
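waitforlisten, invoked above for `/var/tmp/spdk-nbd.sock`, retries until the target process has brought its UNIX-domain RPC socket up. A hedged stand-in that only polls for the socket file — the function name and retry budget are illustrative, and the real helper goes further by issuing an RPC probe once the socket exists:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, up to a retry budget --
# the same wait-loop shape as waitforlisten/waitfornbd in this log.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

waitfornbd uses the same loop shape but greps `/proc/partitions` for the nbd device name instead of testing for a socket.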
00:08:01.458 [2024-11-06 12:14:33.028586] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169872 ] 00:08:01.717 [2024-11-06 12:14:33.123359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:01.717 [2024-11-06 12:14:33.175799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.717 [2024-11-06 12:14:33.175807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.717 12:14:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.717 12:14:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:01.717 12:14:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.975 Malloc0 00:08:01.975 12:14:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.233 Malloc1 00:08:02.491 12:14:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.491 
12:14:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.491 12:14:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:02.750 /dev/nbd0 00:08:02.750 12:14:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.750 12:14:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:02.750 1+0 records in 00:08:02.750 1+0 records out 00:08:02.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221688 s, 18.5 MB/s 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:02.750 12:14:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:02.750 12:14:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.750 12:14:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.750 12:14:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:03.008 /dev/nbd1 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:03.008 12:14:34 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.008 1+0 records in 00:08:03.008 1+0 records out 00:08:03.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222515 s, 18.4 MB/s 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:03.008 12:14:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.008 12:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.267 { 00:08:03.267 "nbd_device": "/dev/nbd0", 00:08:03.267 "bdev_name": "Malloc0" 00:08:03.267 }, 00:08:03.267 { 00:08:03.267 "nbd_device": "/dev/nbd1", 00:08:03.267 "bdev_name": "Malloc1" 00:08:03.267 } 00:08:03.267 ]' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.267 { 00:08:03.267 "nbd_device": "/dev/nbd0", 00:08:03.267 "bdev_name": "Malloc0" 00:08:03.267 
}, 00:08:03.267 { 00:08:03.267 "nbd_device": "/dev/nbd1", 00:08:03.267 "bdev_name": "Malloc1" 00:08:03.267 } 00:08:03.267 ]' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.267 /dev/nbd1' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.267 /dev/nbd1' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:03.267 256+0 records in 00:08:03.267 256+0 records out 00:08:03.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107859 s, 97.2 MB/s 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.267 256+0 records in 00:08:03.267 256+0 records out 00:08:03.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198851 s, 52.7 MB/s 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.267 12:14:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.526 256+0 records in 00:08:03.526 256+0 records out 00:08:03.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212949 s, 49.2 MB/s 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:03.526 12:14:34 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.526 12:14:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.784 12:14:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.043 12:14:35 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.043 12:14:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.301 12:14:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.301 12:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:04.302 12:14:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:04.302 12:14:35 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.560 12:14:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.819 [2024-11-06 12:14:36.300016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.819 [2024-11-06 12:14:36.347717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.819 [2024-11-06 12:14:36.347722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.819 [2024-11-06 12:14:36.391902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.819 [2024-11-06 12:14:36.391946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:08.104 12:14:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:08.104 12:14:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:08.104 spdk_app_start Round 1 00:08:08.104 12:14:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4169872 /var/tmp/spdk-nbd.sock 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 4169872 ']' 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.104 12:14:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:08.104 12:14:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.104 Malloc0 00:08:08.104 12:14:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.363 Malloc1 00:08:08.363 12:14:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.363 12:14:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:08.622 /dev/nbd0 00:08:08.622 12:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.622 12:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.622 1+0 records in 00:08:08.622 1+0 records out 00:08:08.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208228 s, 19.7 MB/s 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:08.622 12:14:40 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.622 12:14:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:08.622 12:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.622 12:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.622 12:14:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:08.881 /dev/nbd1 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.881 1+0 records in 00:08:08.881 1+0 records out 00:08:08.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216251 s, 18.9 MB/s 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.881 12:14:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.881 12:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.140 12:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.140 { 00:08:09.140 "nbd_device": "/dev/nbd0", 00:08:09.140 "bdev_name": "Malloc0" 00:08:09.140 }, 00:08:09.140 { 00:08:09.140 "nbd_device": "/dev/nbd1", 00:08:09.140 "bdev_name": "Malloc1" 00:08:09.140 } 00:08:09.140 ]' 00:08:09.140 12:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.140 { 00:08:09.140 "nbd_device": "/dev/nbd0", 00:08:09.140 "bdev_name": "Malloc0" 00:08:09.140 }, 00:08:09.140 { 00:08:09.140 "nbd_device": "/dev/nbd1", 00:08:09.140 "bdev_name": "Malloc1" 00:08:09.140 } 00:08:09.140 ]' 00:08:09.140 12:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.140 12:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.140 /dev/nbd1' 00:08:09.140 12:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.140 /dev/nbd1' 00:08:09.140 
12:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:09.399 256+0 records in 00:08:09.399 256+0 records out 00:08:09.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101474 s, 103 MB/s 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.399 256+0 records in 00:08:09.399 256+0 records out 00:08:09.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200248 s, 52.4 MB/s 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.399 256+0 records in 00:08:09.399 256+0 records out 00:08:09.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211251 s, 49.6 MB/s 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.399 12:14:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.657 12:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.657 12:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.657 12:14:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.658 12:14:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.916 12:14:41 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.916 12:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.175 12:14:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.175 12:14:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:10.742 12:14:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:10.742 [2024-11-06 12:14:42.242797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.742 [2024-11-06 12:14:42.288075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.742 [2024-11-06 12:14:42.288081] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.742 [2024-11-06 12:14:42.333895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:10.742 [2024-11-06 12:14:42.333956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:14.030 12:14:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:14.030 12:14:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:14.030 spdk_app_start Round 2 00:08:14.030 12:14:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4169872 /var/tmp/spdk-nbd.sock 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 4169872 ']' 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:14.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.030 12:14:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:14.030 12:14:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:14.030 Malloc0 00:08:14.030 12:14:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:14.289 Malloc1 00:08:14.289 12:14:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.289 12:14:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:14.856 /dev/nbd0 00:08:14.856 12:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:14.856 12:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:14.856 12:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:14.856 1+0 records in 00:08:14.856 1+0 records out 00:08:14.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172826 s, 23.7 MB/s 00:08:14.857 12:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:14.857 12:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:14.857 12:14:46 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:14.857 12:14:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:14.857 12:14:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:14.857 12:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.857 12:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.857 12:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:15.115 /dev/nbd1 00:08:15.115 12:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:15.115 12:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:15.115 1+0 records in 00:08:15.115 1+0 records out 00:08:15.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230455 s, 17.8 MB/s 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:15.115 12:14:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:15.115 12:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.115 12:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.115 12:14:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.116 12:14:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.116 12:14:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.374 12:14:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:15.374 { 00:08:15.374 "nbd_device": "/dev/nbd0", 00:08:15.374 "bdev_name": "Malloc0" 00:08:15.374 }, 00:08:15.374 { 00:08:15.374 "nbd_device": "/dev/nbd1", 00:08:15.374 "bdev_name": "Malloc1" 00:08:15.374 } 00:08:15.374 ]' 00:08:15.374 12:14:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:15.374 { 00:08:15.374 "nbd_device": "/dev/nbd0", 00:08:15.374 "bdev_name": "Malloc0" 00:08:15.374 }, 00:08:15.375 { 00:08:15.375 "nbd_device": "/dev/nbd1", 00:08:15.375 "bdev_name": "Malloc1" 00:08:15.375 } 00:08:15.375 ]' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:15.375 /dev/nbd1' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:15.375 /dev/nbd1' 00:08:15.375 
12:14:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:15.375 256+0 records in 00:08:15.375 256+0 records out 00:08:15.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010442 s, 100 MB/s 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:15.375 256+0 records in 00:08:15.375 256+0 records out 00:08:15.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198432 s, 52.8 MB/s 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:15.375 256+0 records in 00:08:15.375 256+0 records out 00:08:15.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211217 s, 49.6 MB/s 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.375 12:14:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:15.942 12:14:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.942 12:14:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:16.200 12:14:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:16.200 12:14:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:16.768 12:14:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:16.768 [2024-11-06 12:14:48.257299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.768 [2024-11-06 12:14:48.302389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.768 [2024-11-06 12:14:48.302396] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.768 [2024-11-06 12:14:48.348140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:16.768 [2024-11-06 12:14:48.348182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:20.078 12:14:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4169872 /var/tmp/spdk-nbd.sock 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 4169872 ']' 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:20.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:20.078 12:14:51 event.app_repeat -- event/event.sh@39 -- # killprocess 4169872 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 4169872 ']' 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 4169872 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4169872 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4169872' 00:08:20.078 killing process with pid 4169872 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@971 -- # kill 4169872 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@976 -- # wait 4169872 00:08:20.078 spdk_app_start is called in Round 0. 00:08:20.078 Shutdown signal received, stop current app iteration 00:08:20.078 Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 reinitialization... 00:08:20.078 spdk_app_start is called in Round 1. 00:08:20.078 Shutdown signal received, stop current app iteration 00:08:20.078 Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 reinitialization... 00:08:20.078 spdk_app_start is called in Round 2. 
00:08:20.078 Shutdown signal received, stop current app iteration 00:08:20.078 Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 reinitialization... 00:08:20.078 spdk_app_start is called in Round 3. 00:08:20.078 Shutdown signal received, stop current app iteration 00:08:20.078 12:14:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:20.078 12:14:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:20.078 00:08:20.078 real 0m18.572s 00:08:20.078 user 0m41.764s 00:08:20.078 sys 0m3.187s 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.078 12:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:20.078 ************************************ 00:08:20.078 END TEST app_repeat 00:08:20.078 ************************************ 00:08:20.078 12:14:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:20.078 12:14:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:20.078 12:14:51 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.078 12:14:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.078 12:14:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:20.078 ************************************ 00:08:20.078 START TEST cpu_locks 00:08:20.078 ************************************ 00:08:20.078 12:14:51 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:20.367 * Looking for test storage... 
00:08:20.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:20.367 12:14:51 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.367 12:14:51 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.368 12:14:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.368 --rc genhtml_branch_coverage=1 00:08:20.368 --rc genhtml_function_coverage=1 00:08:20.368 --rc genhtml_legend=1 00:08:20.368 --rc geninfo_all_blocks=1 00:08:20.368 --rc geninfo_unexecuted_blocks=1 00:08:20.368 00:08:20.368 ' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.368 --rc genhtml_branch_coverage=1 00:08:20.368 --rc genhtml_function_coverage=1 00:08:20.368 --rc genhtml_legend=1 00:08:20.368 --rc geninfo_all_blocks=1 00:08:20.368 --rc geninfo_unexecuted_blocks=1 
00:08:20.368 00:08:20.368 ' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.368 --rc genhtml_branch_coverage=1 00:08:20.368 --rc genhtml_function_coverage=1 00:08:20.368 --rc genhtml_legend=1 00:08:20.368 --rc geninfo_all_blocks=1 00:08:20.368 --rc geninfo_unexecuted_blocks=1 00:08:20.368 00:08:20.368 ' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.368 --rc genhtml_branch_coverage=1 00:08:20.368 --rc genhtml_function_coverage=1 00:08:20.368 --rc genhtml_legend=1 00:08:20.368 --rc geninfo_all_blocks=1 00:08:20.368 --rc geninfo_unexecuted_blocks=1 00:08:20.368 00:08:20.368 ' 00:08:20.368 12:14:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:20.368 12:14:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:20.368 12:14:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:20.368 12:14:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.368 12:14:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.368 ************************************ 00:08:20.368 START TEST default_locks 00:08:20.368 ************************************ 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4173522 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4173522 00:08:20.368 12:14:51 
event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 4173522 ']' 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.368 12:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.368 [2024-11-06 12:14:51.907232] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:20.368 [2024-11-06 12:14:51.907289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173522 ] 00:08:20.673 [2024-11-06 12:14:52.001783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.673 [2024-11-06 12:14:52.052097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:21.014 lslocks: write error 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 4173522 ']' 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 4173522' 00:08:21.014 killing process with pid 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 4173522 00:08:21.014 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 4173522 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4173522 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4173522 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 4173522 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 4173522 ']' 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (4173522) - No such process 00:08:21.306 ERROR: process (pid: 4173522) is no longer running 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:21.306 00:08:21.306 real 0m0.915s 00:08:21.306 user 0m0.906s 00:08:21.306 sys 0m0.396s 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.306 12:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.306 ************************************ 00:08:21.306 END TEST default_locks 00:08:21.306 ************************************ 00:08:21.306 12:14:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:21.306 12:14:52 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.306 12:14:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.306 12:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.306 ************************************ 00:08:21.306 START TEST default_locks_via_rpc 00:08:21.306 ************************************ 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4173816 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4173816 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 4173816 ']' 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.306 12:14:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:21.306 [2024-11-06 12:14:52.880142] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:21.306 [2024-11-06 12:14:52.880195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173816 ] 00:08:21.565 [2024-11-06 12:14:52.972966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.565 [2024-11-06 12:14:53.022896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.824 12:14:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 4173816 ']' 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4173816' 00:08:21.824 killing process with pid 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 4173816 00:08:21.824 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 4173816 00:08:22.391 00:08:22.391 real 0m0.940s 00:08:22.391 user 0m0.939s 00:08:22.391 sys 0m0.418s 00:08:22.391 12:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.391 12:14:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.391 ************************************ 00:08:22.391 END TEST default_locks_via_rpc 00:08:22.391 ************************************ 00:08:22.391 12:14:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:22.391 12:14:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:22.391 12:14:53 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.391 12:14:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.391 ************************************ 00:08:22.391 START TEST non_locking_app_on_locked_coremask 00:08:22.391 ************************************ 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4174104 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4174104 /var/tmp/spdk.sock 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4174104 ']' 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.391 12:14:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.391 [2024-11-06 12:14:53.875062] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:22.391 [2024-11-06 12:14:53.875115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174104 ] 00:08:22.391 [2024-11-06 12:14:53.967037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.651 [2024-11-06 12:14:54.017159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4174108 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4174108 /var/tmp/spdk2.sock 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4174108 ']' 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.651 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.911 [2024-11-06 12:14:54.279236] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:22.911 [2024-11-06 12:14:54.279281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174108 ] 00:08:22.911 [2024-11-06 12:14:54.398754] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.911 [2024-11-06 12:14:54.398782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.911 [2024-11-06 12:14:54.499533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.479 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.479 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:23.479 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4174104 00:08:23.479 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4174104 00:08:23.479 12:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:24.047 lslocks: write error 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4174104 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 4174104 ']' 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 4174104 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4174104 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 4174104' 00:08:24.047 killing process with pid 4174104 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 4174104 00:08:24.047 12:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 4174104 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4174108 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 4174108 ']' 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 4174108 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4174108 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4174108' 00:08:24.984 killing process with pid 4174108 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 4174108 00:08:24.984 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 4174108 00:08:25.243 00:08:25.243 real 0m2.835s 00:08:25.243 user 0m2.946s 00:08:25.243 sys 0m1.034s 00:08:25.243 12:14:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.243 12:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.243 ************************************ 00:08:25.243 END TEST non_locking_app_on_locked_coremask 00:08:25.243 ************************************ 00:08:25.243 12:14:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:25.243 12:14:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:25.243 12:14:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.243 12:14:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:25.243 ************************************ 00:08:25.243 START TEST locking_app_on_unlocked_coremask 00:08:25.243 ************************************ 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4174660 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4174660 /var/tmp/spdk.sock 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4174660 ']' 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.243 12:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:25.243 [2024-11-06 12:14:56.770985] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:25.243 [2024-11-06 12:14:56.771040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174660 ] 00:08:25.502 [2024-11-06 12:14:56.863936] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:25.502 [2024-11-06 12:14:56.863968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.502 [2024-11-06 12:14:56.914079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4174670 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4174670 /var/tmp/spdk2.sock 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4174670 ']' 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.762 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.762 [2024-11-06 12:14:57.176624] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:25.762 [2024-11-06 12:14:57.176669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174670 ] 00:08:25.762 [2024-11-06 12:14:57.299491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.020 [2024-11-06 12:14:57.396540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.279 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.279 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:26.279 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4174670 00:08:26.279 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4174670 00:08:26.279 12:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.537 lslocks: write error 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4174660 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 4174660 ']' 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 4174660 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4174660 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4174660' 00:08:26.537 killing process with pid 4174660 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 4174660 00:08:26.537 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 4174660 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4174670 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 4174670 ']' 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 4174670 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4174670 00:08:27.474 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.475 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.475 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4174670' 00:08:27.475 killing process with pid 4174670 00:08:27.475 12:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 4174670 00:08:27.475 12:14:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 4174670 00:08:27.734 00:08:27.734 real 0m2.455s 00:08:27.734 user 0m2.542s 00:08:27.734 sys 0m0.840s 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.734 ************************************ 00:08:27.734 END TEST locking_app_on_unlocked_coremask 00:08:27.734 ************************************ 00:08:27.734 12:14:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:27.734 12:14:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:27.734 12:14:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.734 12:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.734 ************************************ 00:08:27.734 START TEST locking_app_on_locked_coremask 00:08:27.734 ************************************ 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4175062 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4175062 /var/tmp/spdk.sock 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4175062 ']' 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.734 12:14:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.734 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.734 [2024-11-06 12:14:59.285968] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:27.734 [2024-11-06 12:14:59.286022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175062 ] 00:08:27.994 [2024-11-06 12:14:59.378839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.994 [2024-11-06 12:14:59.428681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4175223 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4175223 /var/tmp/spdk2.sock 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:28.252 12:14:59 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4175223 /var/tmp/spdk2.sock 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4175223 /var/tmp/spdk2.sock 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 4175223 ']' 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.252 12:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.252 [2024-11-06 12:14:59.691604] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:28.252 [2024-11-06 12:14:59.691649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175223 ] 00:08:28.252 [2024-11-06 12:14:59.810294] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4175062 has claimed it. 00:08:28.252 [2024-11-06 12:14:59.810337] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:28.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (4175223) - No such process 00:08:28.821 ERROR: process (pid: 4175223) is no longer running 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4175062 00:08:28.821 12:15:00 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4175062 00:08:28.821 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:29.080 lslocks: write error 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4175062 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 4175062 ']' 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 4175062 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4175062 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4175062' 00:08:29.080 killing process with pid 4175062 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 4175062 00:08:29.080 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 4175062 00:08:29.339 00:08:29.339 real 0m1.691s 00:08:29.339 user 0m1.894s 00:08:29.339 sys 0m0.533s 00:08:29.339 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.339 12:15:00 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.339 ************************************ 00:08:29.339 END TEST locking_app_on_locked_coremask 00:08:29.339 ************************************ 00:08:29.339 12:15:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:29.339 12:15:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.339 12:15:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.339 12:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.598 ************************************ 00:08:29.598 START TEST locking_overlapped_coremask 00:08:29.598 ************************************ 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4175575 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4175575 /var/tmp/spdk.sock 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 4175575 ']' 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.598 12:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.598 [2024-11-06 12:15:01.034343] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:29.598 [2024-11-06 12:15:01.034382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175575 ] 00:08:29.598 [2024-11-06 12:15:01.117348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.598 [2024-11-06 12:15:01.172031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.598 [2024-11-06 12:15:01.172137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.598 [2024-11-06 12:15:01.172137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4175647 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4175647 /var/tmp/spdk2.sock 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 4175647 /var/tmp/spdk2.sock 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4175647 /var/tmp/spdk2.sock 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 4175647 ']' 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:30.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.535 12:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.535 [2024-11-06 12:15:01.978011] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:30.535 [2024-11-06 12:15:01.978077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175647 ] 00:08:30.535 [2024-11-06 12:15:02.073503] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4175575 has claimed it. 00:08:30.535 [2024-11-06 12:15:02.073543] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:31.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (4175647) - No such process 00:08:31.102 ERROR: process (pid: 4175647) is no longer running 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4175575 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 4175575 ']' 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 4175575 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.102 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4175575 00:08:31.361 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.361 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.361 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4175575' 00:08:31.361 killing process with pid 4175575 00:08:31.361 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 4175575 00:08:31.361 12:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 4175575 00:08:31.623 00:08:31.623 real 0m2.108s 00:08:31.623 user 0m6.202s 00:08:31.623 sys 0m0.439s 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 
************************************ 00:08:31.623 END TEST locking_overlapped_coremask 00:08:31.623 ************************************ 00:08:31.623 12:15:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:31.623 12:15:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:31.623 12:15:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.623 12:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 ************************************ 00:08:31.623 START TEST locking_overlapped_coremask_via_rpc 00:08:31.623 ************************************ 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4175953 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4175953 /var/tmp/spdk.sock 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 4175953 ']' 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:31.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.623 12:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.623 [2024-11-06 12:15:03.218022] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:31.623 [2024-11-06 12:15:03.218080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175953 ] 00:08:31.882 [2024-11-06 12:15:03.312009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:31.882 [2024-11-06 12:15:03.312041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.882 [2024-11-06 12:15:03.363346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.882 [2024-11-06 12:15:03.363448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.882 [2024-11-06 12:15:03.363449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4176217 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4176217 /var/tmp/spdk2.sock 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 4176217 ']' 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.819 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 [2024-11-06 12:15:04.160682] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:32.819 [2024-11-06 12:15:04.160745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176217 ] 00:08:32.819 [2024-11-06 12:15:04.257428] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:32.819 [2024-11-06 12:15:04.257456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.819 [2024-11-06 12:15:04.338443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.819 [2024-11-06 12:15:04.341480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.819 [2024-11-06 12:15:04.341482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.386 12:15:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.386 [2024-11-06 12:15:04.745532] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4175953 has claimed it. 00:08:33.386 request: 00:08:33.386 { 00:08:33.386 "method": "framework_enable_cpumask_locks", 00:08:33.386 "req_id": 1 00:08:33.386 } 00:08:33.386 Got JSON-RPC error response 00:08:33.386 response: 00:08:33.386 { 00:08:33.386 "code": -32603, 00:08:33.386 "message": "Failed to claim CPU core: 2" 00:08:33.386 } 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4175953 /var/tmp/spdk.sock 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 4175953 ']' 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.386 12:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4176217 /var/tmp/spdk2.sock 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 4176217 ']' 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:33.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.645 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:33.905 00:08:33.905 real 0m2.177s 00:08:33.905 user 0m1.123s 00:08:33.905 sys 0m0.151s 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.905 12:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.905 ************************************ 00:08:33.905 END TEST locking_overlapped_coremask_via_rpc 00:08:33.905 ************************************ 00:08:33.905 12:15:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:33.905 12:15:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4175953 ]] 00:08:33.905 12:15:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 4175953 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 4175953 ']' 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 4175953 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4175953 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4175953' 00:08:33.905 killing process with pid 4175953 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 4175953 00:08:33.905 12:15:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 4175953 00:08:34.164 12:15:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4176217 ]] 00:08:34.164 12:15:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4176217 00:08:34.164 12:15:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 4176217 ']' 00:08:34.164 12:15:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 4176217 00:08:34.164 12:15:05 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:34.164 12:15:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.164 12:15:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4176217 00:08:34.423 12:15:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:34.423 12:15:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:34.423 12:15:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
4176217' 00:08:34.423 killing process with pid 4176217 00:08:34.423 12:15:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 4176217 00:08:34.423 12:15:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 4176217 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4175953 ]] 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4175953 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 4175953 ']' 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 4175953 00:08:34.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4175953) - No such process 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 4175953 is not found' 00:08:34.683 Process with pid 4175953 is not found 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4176217 ]] 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4176217 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 4176217 ']' 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 4176217 00:08:34.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4176217) - No such process 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 4176217 is not found' 00:08:34.683 Process with pid 4176217 is not found 00:08:34.683 12:15:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.683 00:08:34.683 real 0m14.513s 00:08:34.683 user 0m27.597s 00:08:34.683 sys 0m4.799s 00:08:34.683 12:15:06 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.683 
12:15:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.683 ************************************ 00:08:34.683 END TEST cpu_locks 00:08:34.683 ************************************ 00:08:34.683 00:08:34.683 real 0m39.062s 00:08:34.683 user 1m18.450s 00:08:34.683 sys 0m8.970s 00:08:34.683 12:15:06 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.683 12:15:06 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.683 ************************************ 00:08:34.683 END TEST event 00:08:34.683 ************************************ 00:08:34.683 12:15:06 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:34.683 12:15:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.683 12:15:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.683 12:15:06 -- common/autotest_common.sh@10 -- # set +x 00:08:34.683 ************************************ 00:08:34.683 START TEST thread 00:08:34.683 ************************************ 00:08:34.683 12:15:06 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:34.942 * Looking for test storage... 
00:08:34.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:34.942 12:15:06 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.942 12:15:06 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.942 12:15:06 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.942 12:15:06 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.942 12:15:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.942 12:15:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.942 12:15:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.942 12:15:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.942 12:15:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.943 12:15:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.943 12:15:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.943 12:15:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.943 12:15:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.943 12:15:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.943 12:15:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.943 12:15:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:34.943 12:15:06 thread -- scripts/common.sh@345 -- # : 1 00:08:34.943 12:15:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.943 12:15:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.943 12:15:06 thread -- scripts/common.sh@365 -- # decimal 1 00:08:34.943 12:15:06 thread -- scripts/common.sh@353 -- # local d=1 00:08:34.943 12:15:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.943 12:15:06 thread -- scripts/common.sh@355 -- # echo 1 00:08:34.943 12:15:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.943 12:15:06 thread -- scripts/common.sh@366 -- # decimal 2 00:08:34.943 12:15:06 thread -- scripts/common.sh@353 -- # local d=2 00:08:34.943 12:15:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.943 12:15:06 thread -- scripts/common.sh@355 -- # echo 2 00:08:34.943 12:15:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.943 12:15:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.943 12:15:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.943 12:15:06 thread -- scripts/common.sh@368 -- # return 0 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.943 --rc genhtml_branch_coverage=1 00:08:34.943 --rc genhtml_function_coverage=1 00:08:34.943 --rc genhtml_legend=1 00:08:34.943 --rc geninfo_all_blocks=1 00:08:34.943 --rc geninfo_unexecuted_blocks=1 00:08:34.943 00:08:34.943 ' 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.943 --rc genhtml_branch_coverage=1 00:08:34.943 --rc genhtml_function_coverage=1 00:08:34.943 --rc genhtml_legend=1 00:08:34.943 --rc geninfo_all_blocks=1 00:08:34.943 --rc geninfo_unexecuted_blocks=1 00:08:34.943 00:08:34.943 ' 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.943 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.943 --rc genhtml_branch_coverage=1 00:08:34.943 --rc genhtml_function_coverage=1 00:08:34.943 --rc genhtml_legend=1 00:08:34.943 --rc geninfo_all_blocks=1 00:08:34.943 --rc geninfo_unexecuted_blocks=1 00:08:34.943 00:08:34.943 ' 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.943 --rc genhtml_branch_coverage=1 00:08:34.943 --rc genhtml_function_coverage=1 00:08:34.943 --rc genhtml_legend=1 00:08:34.943 --rc geninfo_all_blocks=1 00:08:34.943 --rc geninfo_unexecuted_blocks=1 00:08:34.943 00:08:34.943 ' 00:08:34.943 12:15:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.943 12:15:06 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.943 ************************************ 00:08:34.943 START TEST thread_poller_perf 00:08:34.943 ************************************ 00:08:34.943 12:15:06 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.943 [2024-11-06 12:15:06.493135] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:34.943 [2024-11-06 12:15:06.493205] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176719 ] 00:08:35.202 [2024-11-06 12:15:06.587887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.202 [2024-11-06 12:15:06.638334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.202 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:36.138 [2024-11-06T11:15:07.753Z] ====================================== 00:08:36.138 [2024-11-06T11:15:07.753Z] busy:2209996738 (cyc) 00:08:36.138 [2024-11-06T11:15:07.753Z] total_run_count: 256000 00:08:36.138 [2024-11-06T11:15:07.753Z] tsc_hz: 2200000000 (cyc) 00:08:36.138 [2024-11-06T11:15:07.753Z] ====================================== 00:08:36.138 [2024-11-06T11:15:07.753Z] poller_cost: 8632 (cyc), 3923 (nsec) 00:08:36.138 00:08:36.138 real 0m1.221s 00:08:36.138 user 0m1.127s 00:08:36.138 sys 0m0.089s 00:08:36.138 12:15:07 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.138 12:15:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:36.138 ************************************ 00:08:36.138 END TEST thread_poller_perf 00:08:36.138 ************************************ 00:08:36.138 12:15:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:36.138 12:15:07 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:36.138 12:15:07 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.138 12:15:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.138 ************************************ 00:08:36.138 START TEST thread_poller_perf 00:08:36.138 
************************************ 00:08:36.138 12:15:07 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:36.397 [2024-11-06 12:15:07.758196] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:36.397 [2024-11-06 12:15:07.758272] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176893 ] 00:08:36.397 [2024-11-06 12:15:07.852738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.397 [2024-11-06 12:15:07.900889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.397 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:37.333 [2024-11-06T11:15:08.948Z] ====================================== 00:08:37.333 [2024-11-06T11:15:08.948Z] busy:2202436298 (cyc) 00:08:37.333 [2024-11-06T11:15:08.948Z] total_run_count: 3363000 00:08:37.333 [2024-11-06T11:15:08.948Z] tsc_hz: 2200000000 (cyc) 00:08:37.333 [2024-11-06T11:15:08.948Z] ====================================== 00:08:37.333 [2024-11-06T11:15:08.948Z] poller_cost: 654 (cyc), 297 (nsec) 00:08:37.333 00:08:37.333 real 0m1.210s 00:08:37.333 user 0m1.121s 00:08:37.333 sys 0m0.083s 00:08:37.333 12:15:08 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.333 12:15:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:37.333 ************************************ 00:08:37.333 END TEST thread_poller_perf 00:08:37.333 ************************************ 00:08:37.592 12:15:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:37.592 00:08:37.592 real 0m2.725s 00:08:37.592 user 0m2.406s 00:08:37.592 sys 0m0.328s 00:08:37.592 12:15:08 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.592 12:15:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.592 ************************************ 00:08:37.592 END TEST thread 00:08:37.592 ************************************ 00:08:37.592 12:15:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:37.592 12:15:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.592 12:15:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:37.592 12:15:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.592 12:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:37.592 ************************************ 00:08:37.592 START TEST app_cmdline 00:08:37.592 ************************************ 00:08:37.592 12:15:09 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.592 * Looking for test storage... 00:08:37.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:37.592 12:15:09 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:37.592 12:15:09 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:37.592 12:15:09 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.851 12:15:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.851 --rc genhtml_branch_coverage=1 
00:08:37.851 --rc genhtml_function_coverage=1 00:08:37.851 --rc genhtml_legend=1 00:08:37.851 --rc geninfo_all_blocks=1 00:08:37.851 --rc geninfo_unexecuted_blocks=1 00:08:37.851 00:08:37.851 ' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.851 --rc genhtml_branch_coverage=1 00:08:37.851 --rc genhtml_function_coverage=1 00:08:37.851 --rc genhtml_legend=1 00:08:37.851 --rc geninfo_all_blocks=1 00:08:37.851 --rc geninfo_unexecuted_blocks=1 00:08:37.851 00:08:37.851 ' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.851 --rc genhtml_branch_coverage=1 00:08:37.851 --rc genhtml_function_coverage=1 00:08:37.851 --rc genhtml_legend=1 00:08:37.851 --rc geninfo_all_blocks=1 00:08:37.851 --rc geninfo_unexecuted_blocks=1 00:08:37.851 00:08:37.851 ' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.851 --rc genhtml_branch_coverage=1 00:08:37.851 --rc genhtml_function_coverage=1 00:08:37.851 --rc genhtml_legend=1 00:08:37.851 --rc geninfo_all_blocks=1 00:08:37.851 --rc geninfo_unexecuted_blocks=1 00:08:37.851 00:08:37.851 ' 00:08:37.851 12:15:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:37.851 12:15:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4177552 00:08:37.851 12:15:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4177552 00:08:37.851 12:15:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 4177552 ']' 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.851 12:15:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.851 [2024-11-06 12:15:09.280687] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:37.851 [2024-11-06 12:15:09.280737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177552 ] 00:08:37.851 [2024-11-06 12:15:09.361166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.851 [2024-11-06 12:15:09.411399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.789 12:15:10 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.789 12:15:10 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:38.790 12:15:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.048 { 00:08:39.048 "version": "SPDK v25.01-pre git sha1 81757caea", 00:08:39.048 "fields": { 00:08:39.048 "major": 25, 00:08:39.048 "minor": 1, 00:08:39.048 "patch": 0, 00:08:39.048 "suffix": "-pre", 00:08:39.048 "commit": "81757caea" 00:08:39.048 } 00:08:39.048 } 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.048 12:15:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.048 12:15:10 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.307 request: 00:08:39.307 { 00:08:39.307 "method": "env_dpdk_get_mem_stats", 00:08:39.307 "req_id": 1 00:08:39.307 } 00:08:39.307 Got JSON-RPC error response 00:08:39.307 response: 00:08:39.307 { 00:08:39.307 "code": -32601, 00:08:39.307 "message": "Method not found" 00:08:39.307 } 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.307 12:15:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4177552 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 4177552 ']' 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 4177552 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4177552 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:39.307 12:15:10 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:39.308 12:15:10 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4177552' 00:08:39.308 killing process with pid 4177552 00:08:39.308 
12:15:10 app_cmdline -- common/autotest_common.sh@971 -- # kill 4177552 00:08:39.308 12:15:10 app_cmdline -- common/autotest_common.sh@976 -- # wait 4177552 00:08:39.564 00:08:39.564 real 0m2.112s 00:08:39.564 user 0m2.665s 00:08:39.564 sys 0m0.514s 00:08:39.564 12:15:11 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.564 12:15:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.564 ************************************ 00:08:39.564 END TEST app_cmdline 00:08:39.564 ************************************ 00:08:39.823 12:15:11 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:39.823 12:15:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.823 12:15:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.823 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 ************************************ 00:08:39.823 START TEST version 00:08:39.823 ************************************ 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:39.823 * Looking for test storage... 
00:08:39.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.823 12:15:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.823 12:15:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.823 12:15:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.823 12:15:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.823 12:15:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.823 12:15:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.823 12:15:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.823 12:15:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.823 12:15:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.823 12:15:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.823 12:15:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.823 12:15:11 version -- scripts/common.sh@344 -- # case "$op" in 00:08:39.823 12:15:11 version -- scripts/common.sh@345 -- # : 1 00:08:39.823 12:15:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.823 12:15:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.823 12:15:11 version -- scripts/common.sh@365 -- # decimal 1 00:08:39.823 12:15:11 version -- scripts/common.sh@353 -- # local d=1 00:08:39.823 12:15:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.823 12:15:11 version -- scripts/common.sh@355 -- # echo 1 00:08:39.823 12:15:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.823 12:15:11 version -- scripts/common.sh@366 -- # decimal 2 00:08:39.823 12:15:11 version -- scripts/common.sh@353 -- # local d=2 00:08:39.823 12:15:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.823 12:15:11 version -- scripts/common.sh@355 -- # echo 2 00:08:39.823 12:15:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.823 12:15:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.823 12:15:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.823 12:15:11 version -- scripts/common.sh@368 -- # return 0 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.823 --rc genhtml_branch_coverage=1 00:08:39.823 --rc genhtml_function_coverage=1 00:08:39.823 --rc genhtml_legend=1 00:08:39.823 --rc geninfo_all_blocks=1 00:08:39.823 --rc geninfo_unexecuted_blocks=1 00:08:39.823 00:08:39.823 ' 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.823 --rc genhtml_branch_coverage=1 00:08:39.823 --rc genhtml_function_coverage=1 00:08:39.823 --rc genhtml_legend=1 00:08:39.823 --rc geninfo_all_blocks=1 00:08:39.823 --rc geninfo_unexecuted_blocks=1 00:08:39.823 00:08:39.823 ' 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.823 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.823 --rc genhtml_branch_coverage=1 00:08:39.823 --rc genhtml_function_coverage=1 00:08:39.823 --rc genhtml_legend=1 00:08:39.823 --rc geninfo_all_blocks=1 00:08:39.823 --rc geninfo_unexecuted_blocks=1 00:08:39.823 00:08:39.823 ' 00:08:39.823 12:15:11 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.823 --rc genhtml_branch_coverage=1 00:08:39.823 --rc genhtml_function_coverage=1 00:08:39.823 --rc genhtml_legend=1 00:08:39.823 --rc geninfo_all_blocks=1 00:08:39.823 --rc geninfo_unexecuted_blocks=1 00:08:39.823 00:08:39.823 ' 00:08:39.823 12:15:11 version -- app/version.sh@17 -- # get_header_version major 00:08:39.823 12:15:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # cut -f2 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.824 12:15:11 version -- app/version.sh@17 -- # major=25 00:08:39.824 12:15:11 version -- app/version.sh@18 -- # get_header_version minor 00:08:39.824 12:15:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # cut -f2 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.824 12:15:11 version -- app/version.sh@18 -- # minor=1 00:08:39.824 12:15:11 version -- app/version.sh@19 -- # get_header_version patch 00:08:39.824 12:15:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # cut -f2 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.824 
12:15:11 version -- app/version.sh@19 -- # patch=0 00:08:39.824 12:15:11 version -- app/version.sh@20 -- # get_header_version suffix 00:08:39.824 12:15:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # cut -f2 00:08:39.824 12:15:11 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.824 12:15:11 version -- app/version.sh@20 -- # suffix=-pre 00:08:39.824 12:15:11 version -- app/version.sh@22 -- # version=25.1 00:08:39.824 12:15:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:39.824 12:15:11 version -- app/version.sh@28 -- # version=25.1rc0 00:08:39.824 12:15:11 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:39.824 12:15:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:40.083 12:15:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:40.083 12:15:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:40.083 00:08:40.083 real 0m0.258s 00:08:40.083 user 0m0.170s 00:08:40.083 sys 0m0.132s 00:08:40.083 12:15:11 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.083 12:15:11 version -- common/autotest_common.sh@10 -- # set +x 00:08:40.083 ************************************ 00:08:40.083 END TEST version 00:08:40.083 ************************************ 00:08:40.083 12:15:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:40.083 12:15:11 -- spdk/autotest.sh@194 -- # uname -s 00:08:40.083 12:15:11 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:40.083 12:15:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:40.083 12:15:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:40.083 12:15:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:40.083 12:15:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.083 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.083 12:15:11 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:40.083 12:15:11 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:40.083 12:15:11 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:40.083 12:15:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:40.083 12:15:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.083 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.083 ************************************ 00:08:40.083 START TEST nvmf_tcp 00:08:40.083 ************************************ 00:08:40.083 12:15:11 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:40.083 * Looking for test storage... 
00:08:40.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:40.083 12:15:11 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.083 12:15:11 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.083 12:15:11 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.342 12:15:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.342 --rc genhtml_branch_coverage=1 00:08:40.342 --rc genhtml_function_coverage=1 00:08:40.342 --rc genhtml_legend=1 00:08:40.342 --rc geninfo_all_blocks=1 00:08:40.342 --rc geninfo_unexecuted_blocks=1 00:08:40.342 00:08:40.342 ' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.342 --rc genhtml_branch_coverage=1 00:08:40.342 --rc genhtml_function_coverage=1 00:08:40.342 --rc genhtml_legend=1 00:08:40.342 --rc geninfo_all_blocks=1 00:08:40.342 --rc geninfo_unexecuted_blocks=1 00:08:40.342 00:08:40.342 ' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:08:40.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.342 --rc genhtml_branch_coverage=1 00:08:40.342 --rc genhtml_function_coverage=1 00:08:40.342 --rc genhtml_legend=1 00:08:40.342 --rc geninfo_all_blocks=1 00:08:40.342 --rc geninfo_unexecuted_blocks=1 00:08:40.342 00:08:40.342 ' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.342 --rc genhtml_branch_coverage=1 00:08:40.342 --rc genhtml_function_coverage=1 00:08:40.342 --rc genhtml_legend=1 00:08:40.342 --rc geninfo_all_blocks=1 00:08:40.342 --rc geninfo_unexecuted_blocks=1 00:08:40.342 00:08:40.342 ' 00:08:40.342 12:15:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:40.342 12:15:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:40.342 12:15:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.342 12:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.342 ************************************ 00:08:40.342 START TEST nvmf_target_core 00:08:40.342 ************************************ 00:08:40.342 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:40.342 * Looking for test storage... 
00:08:40.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:40.342 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.342 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.342 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.601 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.602 --rc genhtml_branch_coverage=1 00:08:40.602 --rc genhtml_function_coverage=1 00:08:40.602 --rc genhtml_legend=1 00:08:40.602 --rc geninfo_all_blocks=1 00:08:40.602 --rc geninfo_unexecuted_blocks=1 00:08:40.602 00:08:40.602 ' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.602 --rc genhtml_branch_coverage=1 
00:08:40.602 --rc genhtml_function_coverage=1 00:08:40.602 --rc genhtml_legend=1 00:08:40.602 --rc geninfo_all_blocks=1 00:08:40.602 --rc geninfo_unexecuted_blocks=1 00:08:40.602 00:08:40.602 ' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.602 --rc genhtml_branch_coverage=1 00:08:40.602 --rc genhtml_function_coverage=1 00:08:40.602 --rc genhtml_legend=1 00:08:40.602 --rc geninfo_all_blocks=1 00:08:40.602 --rc geninfo_unexecuted_blocks=1 00:08:40.602 00:08:40.602 ' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.602 --rc genhtml_branch_coverage=1 00:08:40.602 --rc genhtml_function_coverage=1 00:08:40.602 --rc genhtml_legend=1 00:08:40.602 --rc geninfo_all_blocks=1 00:08:40.602 --rc geninfo_unexecuted_blocks=1 00:08:40.602 00:08:40.602 ' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.602 12:15:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.602 ************************************ 00:08:40.602 START TEST nvmf_abort 00:08:40.602 ************************************ 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:40.602 * Looking for test storage... 
00:08:40.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.602 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.861 
12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.861 --rc genhtml_branch_coverage=1 00:08:40.861 --rc genhtml_function_coverage=1 00:08:40.861 --rc genhtml_legend=1 00:08:40.861 --rc geninfo_all_blocks=1 00:08:40.861 --rc 
geninfo_unexecuted_blocks=1 00:08:40.861 00:08:40.861 ' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.861 --rc genhtml_branch_coverage=1 00:08:40.861 --rc genhtml_function_coverage=1 00:08:40.861 --rc genhtml_legend=1 00:08:40.861 --rc geninfo_all_blocks=1 00:08:40.861 --rc geninfo_unexecuted_blocks=1 00:08:40.861 00:08:40.861 ' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.861 --rc genhtml_branch_coverage=1 00:08:40.861 --rc genhtml_function_coverage=1 00:08:40.861 --rc genhtml_legend=1 00:08:40.861 --rc geninfo_all_blocks=1 00:08:40.861 --rc geninfo_unexecuted_blocks=1 00:08:40.861 00:08:40.861 ' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.861 --rc genhtml_branch_coverage=1 00:08:40.861 --rc genhtml_function_coverage=1 00:08:40.861 --rc genhtml_legend=1 00:08:40.861 --rc geninfo_all_blocks=1 00:08:40.861 --rc geninfo_unexecuted_blocks=1 00:08:40.861 00:08:40.861 ' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.861 12:15:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.861 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.862 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.428 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.429 12:15:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:47.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:47.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.429 12:15:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:47.429 Found net devices under 0000:af:00.0: cvl_0_0 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:08:47.429 Found net devices under 0000:af:00.1: cvl_0_1 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE")
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:47.429 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:47.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:47.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms
00:08:47.429
00:08:47.429 --- 10.0.0.2 ping statistics ---
00:08:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.429 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:47.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:47.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:08:47.429
00:08:47.429 --- 10.0.0.1 ping statistics ---
00:08:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:47.429 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=4181706 00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4181706 00:08:47.429 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 4181706 ']' 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 [2024-11-06 12:15:18.177011] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:08:47.430 [2024-11-06 12:15:18.177073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.430 [2024-11-06 12:15:18.249258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:47.430 [2024-11-06 12:15:18.290516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.430 [2024-11-06 12:15:18.290553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.430 [2024-11-06 12:15:18.290560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.430 [2024-11-06 12:15:18.290565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.430 [2024-11-06 12:15:18.290572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
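The namespace plumbing traced a few lines up (nvmf/common.sh@267 through @291) is what lets the target at 10.0.0.2 and the initiator at 10.0.0.1 talk over two physical ports on one host: the target-side interface is moved into its own network namespace. The sketch below only echoes that command sequence rather than executing it, since the real thing needs root and the physical cvl_0_* interfaces; `setup_cmds` is an illustrative helper, not part of nvmf/common.sh.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up in the trace above.
# setup_cmds is illustrative; it prints the commands instead of running
# them, because the real sequence requires root and the actual NICs.
setup_cmds() {
    local tgt_if=$1 ini_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two pings at the end mirror the connectivity check in the log: one from the root namespace to the target address, one from inside the namespace back to the initiator.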
00:08:47.430 [2024-11-06 12:15:18.292041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.430 [2024-11-06 12:15:18.292136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.430 [2024-11-06 12:15:18.292137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 [2024-11-06 12:15:18.438666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 Malloc0 00:08:47.430 12:15:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 Delay0 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.430 [2024-11-06 12:15:18.503696] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.430 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:08:47.430 [2024-11-06 12:15:18.638229] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:49.336 Initializing NVMe Controllers
00:08:49.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:08:49.336 controller IO queue size 128 less than required
00:08:49.336 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:08:49.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:08:49.336 Initialization complete. Launching workers.
00:08:49.336 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 23681 00:08:49.336 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23746, failed to submit 62 00:08:49.336 success 23685, unsuccessful 61, failed 0 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.336 rmmod nvme_tcp 00:08:49.336 rmmod nvme_fabrics 00:08:49.336 rmmod nvme_keyring 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:49.336 12:15:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4181706 ']' 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4181706 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 4181706 ']' 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 4181706 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:08:49.336 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4181706 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4181706' 00:08:49.337 killing process with pid 4181706 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 4181706 00:08:49.337 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 4181706 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.596 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.503 00:08:51.503 real 0m11.004s 00:08:51.503 user 0m11.575s 00:08:51.503 sys 0m5.207s 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 ************************************ 00:08:51.503 END TEST nvmf_abort 00:08:51.503 ************************************ 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.503 12:15:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.763 ************************************ 00:08:51.763 START TEST nvmf_ns_hotplug_stress 00:08:51.763 ************************************ 00:08:51.763 12:15:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:51.763 * Looking for test storage... 00:08:51.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.764 
12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.764 12:15:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.764 --rc genhtml_branch_coverage=1 00:08:51.764 --rc genhtml_function_coverage=1 00:08:51.764 --rc genhtml_legend=1 00:08:51.764 --rc geninfo_all_blocks=1 00:08:51.764 --rc geninfo_unexecuted_blocks=1 00:08:51.764 00:08:51.764 ' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.764 --rc genhtml_branch_coverage=1 00:08:51.764 --rc genhtml_function_coverage=1 00:08:51.764 --rc genhtml_legend=1 00:08:51.764 --rc geninfo_all_blocks=1 00:08:51.764 --rc geninfo_unexecuted_blocks=1 00:08:51.764 00:08:51.764 ' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.764 --rc genhtml_branch_coverage=1 00:08:51.764 --rc genhtml_function_coverage=1 00:08:51.764 --rc genhtml_legend=1 00:08:51.764 --rc geninfo_all_blocks=1 00:08:51.764 --rc geninfo_unexecuted_blocks=1 00:08:51.764 00:08:51.764 ' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.764 --rc genhtml_branch_coverage=1 00:08:51.764 --rc genhtml_function_coverage=1 00:08:51.764 --rc genhtml_legend=1 00:08:51.764 --rc geninfo_all_blocks=1 00:08:51.764 --rc geninfo_unexecuted_blocks=1 00:08:51.764 
00:08:51.764 ' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.764 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.042 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.042 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.042 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.043 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.043 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.043 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.043 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.043 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.043 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.302 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.302 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.302 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.302 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:08:57.302 00:08:57.302 --- 10.0.0.2 ping statistics --- 00:08:57.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.302 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:08:57.302 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:57.303 00:08:57.303 --- 10.0.0.1 ping statistics --- 00:08:57.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.303 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4185755 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4185755 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 4185755 ']' 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
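The network plumbing that `nvmf_tcp_init` performs in the log above (common.sh@250-291) can be summarized as a short dry-run script. Interface names (`cvl_0_0`, `cvl_0_1`), addresses, and the iptables rule are taken directly from the log; the `run` helper only echoes each command, since the real ones require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps recorded above.
# "run" echoes instead of executing, because these commands need root.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                        # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, host side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```

The two pings at the end correspond to the connectivity checks at common.sh@290-291 before the transport options are set and `nvme-tcp` is loaded.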
00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.303 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.303 [2024-11-06 12:15:28.815810] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:08:57.303 [2024-11-06 12:15:28.815868] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.303 [2024-11-06 12:15:28.886881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.562 [2024-11-06 12:15:28.927378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.562 [2024-11-06 12:15:28.927410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.562 [2024-11-06 12:15:28.927418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.562 [2024-11-06 12:15:28.927423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.562 [2024-11-06 12:15:28.927428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
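The `nvmfappstart`/`waitforlisten` sequence recorded above (common.sh@508-510, autotest_common.sh@833-866) amounts to launching `nvmf_tgt` inside the namespace and polling its RPC socket until it listens. A minimal sketch, with two stated assumptions: the launch is echoed rather than executed, and a plain socket-file poll stands in for the real `rpc.py` probe:

```shell
# Minimal sketch of nvmfappstart/waitforlisten. Assumptions: start_target
# only echoes the launch command, and waitforlisten polls for the socket
# file instead of probing it with rpc.py as the real script does.
rpc_addr=/var/tmp/spdk.sock
max_retries=100

start_target() {
    echo "+ ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE"
}

waitforlisten() {
    local pid=$1 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Once the socket is up, the trap at common.sh@512 is installed and the test proceeds to the `rpc.py` provisioning calls seen below (create transport, subsystem, listeners, `Malloc0`/`Delay0`/`NULL1` bdevs) and the add-ns/resize/remove-ns stress cycle.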
00:08:57.562 [2024-11-06 12:15:28.928913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.562 [2024-11-06 12:15:28.928988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.562 [2024-11-06 12:15:28.928990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:57.562 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.821 [2024-11-06 12:15:29.324187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.821 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.080 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.338 [2024-11-06 12:15:29.862067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.338 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.596 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:58.855 Malloc0 00:08:58.855 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.113 Delay0 00:08:59.113 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.372 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:59.630 NULL1 00:08:59.630 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:59.630 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4186302 00:08:59.631 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:59.631 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:08:59.631 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.007 Read completed with error (sct=0, sc=11) 00:09:01.007 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.007 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:01.007 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:01.267 true 00:09:01.267 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:01.267 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.203 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.461 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:02.461 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:02.720 true 00:09:02.720 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:02.720 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.981 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.239 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:03.239 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:03.498 true 00:09:03.498 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:03.498 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.757 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.016 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:04.016 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:04.274 true 00:09:04.274 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:04.274 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.210 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.468 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:05.469 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:05.727 true 00:09:05.727 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:05.727 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.985 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.244 
12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:06.244 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:06.503 true 00:09:06.503 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:06.503 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.439 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.439 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:07.439 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:07.698 true 00:09:07.698 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:07.698 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.956 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.227 12:15:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:08.227 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:08.489 true 00:09:08.747 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:08.748 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.685 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.685 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:09.685 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:09.944 true 00:09:09.944 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:09.944 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.203 12:15:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.462 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:10.462 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:10.721 true 00:09:10.721 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:10.721 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.657 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.916 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:11.916 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:12.174 true 00:09:12.174 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:12.174 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.433 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.692 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:12.692 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:12.951 true 00:09:12.951 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:12.951 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.518 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.518 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:13.518 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:13.777 true 00:09:14.035 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:14.035 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.971 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.230 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:15.230 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:15.230 true 00:09:15.230 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:15.230 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.798 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.798 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:15.798 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:16.056 true 00:09:16.056 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:16.056 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.992 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.251 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:17.251 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:17.510 true 00:09:17.510 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:17.510 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.769 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.028 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:18.028 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:18.285 true 00:09:18.285 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:18.285 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.221 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.479 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:19.479 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:19.738 true 00:09:19.738 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:19.738 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.997 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.255 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:20.255 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:20.255 true 00:09:20.255 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:20.255 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.514 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.772 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:20.772 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:21.031 true 00:09:21.031 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:21.031 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.666 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:22.666 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:22.926 true 00:09:22.926 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:22.926 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.493 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.752 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:23.752 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:24.010 true 00:09:24.010 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:24.010 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.269 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.527 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:24.527 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:24.786 true 00:09:24.786 12:15:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:24.786 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.723 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.981 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:25.981 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:26.240 true 00:09:26.240 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:26.240 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.499 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.756 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:26.756 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:26.756 true 00:09:26.756 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:26.756 12:15:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.325 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.586 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:27.586 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:27.845 true 00:09:27.845 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:27.845 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.781 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.060 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:29.060 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:29.338 true 00:09:29.338 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302 00:09:29.338 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.661 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.661 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:29.661 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:29.944 Initializing NVMe Controllers 00:09:29.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:29.944 Controller IO queue size 128, less than required. 00:09:29.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:29.944 Controller IO queue size 128, less than required. 00:09:29.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:29.944 Initialization complete. Launching workers. 
00:09:29.944 ========================================================
00:09:29.944 Latency(us)
00:09:29.944 Device Information : IOPS MiB/s Average min max
00:09:29.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 956.04 0.47 66324.36 2936.75 1086183.17
00:09:29.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13671.43 6.68 9361.78 1885.51 555756.43
00:09:29.944 ========================================================
00:09:29.944 Total : 14627.48 7.14 13084.82 1885.51 1086183.17
00:09:29.944
00:09:29.944 true
00:09:29.944 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4186302
00:09:29.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4186302) - No such process
00:09:29.944 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4186302
00:09:29.944 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:30.203 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:30.462 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:30.462 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:30.462 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:30.462 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:30.462 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:30.721 null0 00:09:30.980 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:30.980 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:30.980 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:31.239 null1 00:09:31.239 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:31.239 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:31.239 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:31.498 null2 00:09:31.498 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:31.498 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:31.498 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:31.758 null3 00:09:31.758 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:31.758 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:31.758 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:32.016 null4 00:09:32.016 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:32.016 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:32.016 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:32.275 null5 00:09:32.275 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:32.275 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:32.275 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:32.533 null6 00:09:32.533 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:32.533 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:32.533 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:32.793 null7 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:32.793 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4192390 4192392 4192395 4192398 4192401 4192405 4192407 4192409 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.794 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.053 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:33.311 12:16:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.311 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.312 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:33.570 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:33.570 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:33.571 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:33.829 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.829 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.829 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:33.829 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.829 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:33.830 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.089 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.348 12:16:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.348 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.349 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:34.349 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.349 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.349 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.608 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.608 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.608 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:34.867 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.126 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.385 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.645 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.903 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:35.904 
12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.904 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:35.904 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:35.904 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.904 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:09:36.163 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.421 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.421 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.421 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.422 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.422 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.681 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.940 12:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.940 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.940 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.940 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.940 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.941 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.941 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.941 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.941 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.200 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.459 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.459 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.459 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.459 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.459 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.459 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
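The `ns_hotplug_stress.sh@16`-`@18` markers traced above correspond to a loop of roughly this shape: ten iterations that randomly add or remove a namespace (nsid 1..8, backed by bdevs `null0`..`null7`) on `nqn.2016-06.io.spdk:cnode1` via `scripts/rpc.py`. This is a sketch reconstructed from the trace, not the actual script; the `rpc` stub and the `stress_loop` name are illustrative, standing in for real `rpc.py` invocations so the shape is self-contained.

```shell
#!/usr/bin/env bash
# Stub standing in for scripts/rpc.py (the real test issues JSON-RPC
# calls against the running nvmf target; here we just echo the command).
rpc() { echo "rpc.py $*"; }

# Reconstruction of the traced add/remove loop: nsid n is served by
# bdev null$((n-1)), matching pairs like "-n 7 ... null6" in the log.
stress_loop() {
    local NQN=nqn.2016-06.io.spdk:cnode1 i=0 n
    while (( i < 10 )); do
        n=$(( (RANDOM % 8) + 1 ))                    # namespace IDs 1..8
        if (( RANDOM % 2 )); then
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        else
            rpc nvmf_subsystem_remove_ns "$NQN" "$n"
        fi
        (( ++i ))                                    # sh@16 counter step
    done
}

stress_loop
```

Duplicate adds and removes of missing namespaces fail at the RPC level, which is expected: the point of the stress test is hammering hotplug paths, not keeping the namespace map consistent.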
00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.718 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.976 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.234 rmmod nvme_tcp 00:09:38.234 rmmod nvme_fabrics 00:09:38.234 rmmod nvme_keyring 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:38.234 12:16:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4185755 ']' 00:09:38.234 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4185755 00:09:38.491 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 4185755 ']' 00:09:38.491 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 4185755 00:09:38.491 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:09:38.491 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.491 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4185755 00:09:38.492 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:38.492 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:38.492 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4185755' 00:09:38.492 killing process with pid 4185755 00:09:38.492 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 4185755 00:09:38.492 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 4185755 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.492 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:41.028 00:09:41.028 real 0m49.025s 00:09:41.028 user 3m33.122s 00:09:41.028 sys 0m15.436s 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.028 ************************************ 00:09:41.028 END TEST nvmf_ns_hotplug_stress 00:09:41.028 ************************************ 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:41.028 12:16:12 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.028 ************************************ 00:09:41.028 START TEST nvmf_delete_subsystem 00:09:41.028 ************************************ 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:41.028 * Looking for test storage... 00:09:41.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.028 12:16:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.028 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:41.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.028 --rc genhtml_branch_coverage=1 00:09:41.028 --rc genhtml_function_coverage=1 00:09:41.028 --rc genhtml_legend=1 
00:09:41.028 --rc geninfo_all_blocks=1 00:09:41.028 --rc geninfo_unexecuted_blocks=1 00:09:41.028 00:09:41.028 ' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.029 --rc genhtml_branch_coverage=1 00:09:41.029 --rc genhtml_function_coverage=1 00:09:41.029 --rc genhtml_legend=1 00:09:41.029 --rc geninfo_all_blocks=1 00:09:41.029 --rc geninfo_unexecuted_blocks=1 00:09:41.029 00:09:41.029 ' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.029 --rc genhtml_branch_coverage=1 00:09:41.029 --rc genhtml_function_coverage=1 00:09:41.029 --rc genhtml_legend=1 00:09:41.029 --rc geninfo_all_blocks=1 00:09:41.029 --rc geninfo_unexecuted_blocks=1 00:09:41.029 00:09:41.029 ' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:41.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.029 --rc genhtml_branch_coverage=1 00:09:41.029 --rc genhtml_function_coverage=1 00:09:41.029 --rc genhtml_legend=1 00:09:41.029 --rc geninfo_all_blocks=1 00:09:41.029 --rc geninfo_unexecuted_blocks=1 00:09:41.029 00:09:41.029 ' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:41.029 12:16:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:09:41.029 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:09:47.598 Found 0000:af:00.0 (0x8086 - 0x159b)
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:09:47.598 Found 0000:af:00.1 (0x8086 - 0x159b)
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:47.598 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:09:47.599 Found net devices under 0000:af:00.0: cvl_0_0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:47.599 Found net devices under 0000:af:00.1: cvl_0_1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
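The nvmf_tcp_init sequence traced above moves one port of the NIC (cvl_0_0) into a private network namespace and leaves its sibling (cvl_0_1) in the root namespace, so target and initiator can talk over real hardware on a single host. Collected as a standalone sketch; the `run()` dry-run wrapper is illustrative and not part of the harness (it echoes instead of executing, since the real commands need root and these specific interfaces):

```shell
#!/bin/sh
# Namespace/address layout from the log above: target side in cvl_0_0_ns_spdk
# at 10.0.0.2, initiator side left in the root namespace at 10.0.0.1.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # dry-run wrapper; change the body to "$@" to execute

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

With this layout, every target-side command in the log (including nvmf_tgt itself) is simply prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what NVMF_TARGET_NS_CMD holds.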
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:47.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:47.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms
00:09:47.599
00:09:47.599 --- 10.0.0.2 ping statistics ---
00:09:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:47.599 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:47.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:47.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms
00:09:47.599
00:09:47.599 --- 10.0.0.1 ping statistics ---
00:09:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:47.599 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3847
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3847
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3847 ']'
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:47.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.599 [2024-11-06 12:16:18.442506] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:09:47.599 [2024-11-06 12:16:18.442563] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:47.599 [2024-11-06 12:16:18.540960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:47.599 [2024-11-06 12:16:18.588095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:47.599 [2024-11-06 12:16:18.588136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:47.599 [2024-11-06 12:16:18.588147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:47.599 [2024-11-06 12:16:18.588156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:47.599 [2024-11-06 12:16:18.588164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:47.599 [2024-11-06 12:16:18.589643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:47.599 [2024-11-06 12:16:18.589650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.599 [2024-11-06 12:16:18.731439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.599 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.600 [2024-11-06 12:16:18.747680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.600 NULL1
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.600 Delay0
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3948
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:09:47.600 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:09:47.600 [2024-11-06 12:16:18.832425] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
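The rpc_cmd calls traced above come from target/delete_subsystem.sh: create the TCP transport, a subsystem with a listener, and a null bdev wrapped in a delay bdev (1,000,000 µs of artificial latency, so I/O submitted by perf is still queued when the subsystem is later deleted). Outside the harness the same sequence can be issued with SPDK's scripts/rpc.py; this is a sketch that assumes a target already running as in the log, with a dry-run `rpc()` wrapper (illustrative, not part of the harness):

```shell
#!/bin/sh
# Dry-run wrapper: echoes each RPC. To execute for real, change the body to:
#   scripts/rpc.py -s /var/tmp/spdk.sock "$@"
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte in-capsule data
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512               # null bdev: 1000 MiB, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the key piece: with every read and write held for a second, the subsequent delete races against a full queue of outstanding I/O rather than an idle target.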
00:09:49.505 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:49.505 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.505 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:49.505 Write completed with error (sct=0, sc=8)
00:09:49.505 Read completed with error (sct=0, sc=8)
00:09:49.505 Read completed with error (sct=0, sc=8)
00:09:49.505 Read completed with error (sct=0, sc=8)
00:09:49.505 starting I/O failed: -6
00:09:49.505 [... further repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines elided ...]
00:09:49.506 [2024-11-06 12:16:20.914476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4b800d020 is same with the state(6) to be set
00:09:49.506 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided ...]
00:09:50.442 [2024-11-06 12:16:21.885793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee5e0 is same with the state(6) to be set
00:09:50.442 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided ...]
00:09:50.442 [2024-11-06 12:16:21.917032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed4a0 is same with the state(6) to be set
00:09:50.442 [... repeated 'Read/Write completed with error (sct=0, sc=8)' lines elided ...]
00:09:50.442 [2024-11-06 12:16:21.917287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ecf00 is same with the state(6) to be set
00:09:50.442 [... remaining 'Read/Write completed with error (sct=0, sc=8)' lines elided ...]
with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 [2024-11-06 12:16:21.917422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed0e0 is same with the state(6) to be set 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Read completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.442 Write completed with error (sct=0, sc=8) 00:09:50.443 Read completed with error (sct=0, sc=8) 00:09:50.443 Read completed with error (sct=0, sc=8) 00:09:50.443 Write completed with error (sct=0, sc=8) 00:09:50.443 Read completed with error (sct=0, sc=8) 00:09:50.443 Read completed with error 
(sct=0, sc=8) 00:09:50.443 Write completed with error (sct=0, sc=8) 00:09:50.443 [2024-11-06 12:16:21.918604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4b800d350 is same with the state(6) to be set 00:09:50.443 Initializing NVMe Controllers 00:09:50.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:50.443 Controller IO queue size 128, less than required. 00:09:50.443 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:50.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:50.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:50.443 Initialization complete. Launching workers. 00:09:50.443 ======================================================== 00:09:50.443 Latency(us) 00:09:50.443 Device Information : IOPS MiB/s Average min max 00:09:50.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.10 0.09 999622.77 381.17 2001828.20 00:09:50.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.30 0.07 908353.25 281.60 2002632.91 00:09:50.443 ======================================================== 00:09:50.443 Total : 329.39 0.16 957424.36 281.60 2002632.91 00:09:50.443 00:09:50.443 [2024-11-06 12:16:21.918934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee5e0 (9): Bad file descriptor 00:09:50.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:50.443 12:16:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.443 12:16:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:50.443 12:16:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3948 
00:09:50.443 12:16:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3948 00:09:51.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3948) - No such process 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3948 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3948 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3948 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.010 [2024-11-06 12:16:22.447094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4512 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:51.010 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:51.010 [2024-11-06 12:16:22.515799] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:51.577 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:51.577 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:51.577 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:52.144 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:52.144 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:52.144 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:52.403 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:52.403 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:52.403 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:52.970 12:16:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:52.970 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:52.970 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:53.541 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:53.541 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:53.541 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:54.108 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:54.108 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:54.108 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:54.367 Initializing NVMe Controllers 00:09:54.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.367 Controller IO queue size 128, less than required. 00:09:54.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:54.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:54.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:54.367 Initialization complete. Launching workers. 
00:09:54.367 ======================================================== 00:09:54.367 Latency(us) 00:09:54.367 Device Information : IOPS MiB/s Average min max 00:09:54.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004648.58 1000197.31 1041162.87 00:09:54.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004073.20 1000178.53 1041186.44 00:09:54.367 ======================================================== 00:09:54.367 Total : 256.00 0.12 1004360.89 1000178.53 1041186.44 00:09:54.367 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4512 00:09:54.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4512) - No such process 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4512 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.627 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:09:54.627 rmmod nvme_tcp 00:09:54.627 rmmod nvme_fabrics 00:09:54.627 rmmod nvme_keyring 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3847 ']' 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3847 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3847 ']' 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3847 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3847 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3847' 00:09:54.627 killing process with pid 3847 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3847 00:09:54.627 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3847 00:09:54.886 12:16:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.886 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.791 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.791 00:09:56.791 real 0m16.151s 00:09:56.791 user 0m29.196s 00:09:56.791 sys 0m5.469s 00:09:56.791 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.791 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.791 ************************************ 00:09:56.791 END TEST nvmf_delete_subsystem 00:09:56.791 
************************************ 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.051 ************************************ 00:09:57.051 START TEST nvmf_host_management 00:09:57.051 ************************************ 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:57.051 * Looking for test storage... 00:09:57.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.051 --rc genhtml_branch_coverage=1 00:09:57.051 --rc genhtml_function_coverage=1 00:09:57.051 --rc genhtml_legend=1 00:09:57.051 --rc 
geninfo_all_blocks=1 00:09:57.051 --rc geninfo_unexecuted_blocks=1 00:09:57.051 00:09:57.051 ' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.051 --rc genhtml_branch_coverage=1 00:09:57.051 --rc genhtml_function_coverage=1 00:09:57.051 --rc genhtml_legend=1 00:09:57.051 --rc geninfo_all_blocks=1 00:09:57.051 --rc geninfo_unexecuted_blocks=1 00:09:57.051 00:09:57.051 ' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.051 --rc genhtml_branch_coverage=1 00:09:57.051 --rc genhtml_function_coverage=1 00:09:57.051 --rc genhtml_legend=1 00:09:57.051 --rc geninfo_all_blocks=1 00:09:57.051 --rc geninfo_unexecuted_blocks=1 00:09:57.051 00:09:57.051 ' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.051 --rc genhtml_branch_coverage=1 00:09:57.051 --rc genhtml_function_coverage=1 00:09:57.051 --rc genhtml_legend=1 00:09:57.051 --rc geninfo_all_blocks=1 00:09:57.051 --rc geninfo_unexecuted_blocks=1 00:09:57.051 00:09:57.051 ' 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.051 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.310 
12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.310 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.578 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:02.579 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:02.579 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.579 12:16:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:02.579 Found net devices under 0000:af:00.0: cvl_0_0 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:02.579 Found net devices under 0000:af:00.1: cvl_0_1 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.579 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:10:02.838 00:10:02.838 --- 10.0.0.2 ping statistics --- 00:10:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.838 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:10:02.838 00:10:02.838 --- 10.0.0.1 ping statistics --- 00:10:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.838 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.838 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.097 12:16:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=8999 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 8999 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 8999 ']' 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.097 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.097 [2024-11-06 12:16:34.530520] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:10:03.097 [2024-11-06 12:16:34.530582] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.097 [2024-11-06 12:16:34.602915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.097 [2024-11-06 12:16:34.641477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.097 [2024-11-06 12:16:34.641514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.097 [2024-11-06 12:16:34.641522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.097 [2024-11-06 12:16:34.641527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.097 [2024-11-06 12:16:34.641532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:03.097 [2024-11-06 12:16:34.643155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.097 [2024-11-06 12:16:34.643256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.097 [2024-11-06 12:16:34.643334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.097 [2024-11-06 12:16:34.643336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.356 [2024-11-06 12:16:34.805270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:03.356 12:16:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:03.356 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 Malloc0 00:10:03.357 [2024-11-06 12:16:34.881163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=9065 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 9065 /var/tmp/bdevperf.sock 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 9065 ']' 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.357 { 00:10:03.357 "params": { 00:10:03.357 "name": "Nvme$subsystem", 00:10:03.357 "trtype": "$TEST_TRANSPORT", 00:10:03.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.357 "adrfam": "ipv4", 00:10:03.357 "trsvcid": "$NVMF_PORT", 00:10:03.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.357 "hdgst": ${hdgst:-false}, 
00:10:03.357 "ddgst": ${ddgst:-false} 00:10:03.357 }, 00:10:03.357 "method": "bdev_nvme_attach_controller" 00:10:03.357 } 00:10:03.357 EOF 00:10:03.357 )") 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:03.357 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.357 "params": { 00:10:03.357 "name": "Nvme0", 00:10:03.357 "trtype": "tcp", 00:10:03.357 "traddr": "10.0.0.2", 00:10:03.357 "adrfam": "ipv4", 00:10:03.357 "trsvcid": "4420", 00:10:03.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:03.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:03.357 "hdgst": false, 00:10:03.357 "ddgst": false 00:10:03.357 }, 00:10:03.357 "method": "bdev_nvme_attach_controller" 00:10:03.357 }' 00:10:03.616 [2024-11-06 12:16:34.982262] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:10:03.616 [2024-11-06 12:16:34.982320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9065 ] 00:10:03.616 [2024-11-06 12:16:35.076804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.616 [2024-11-06 12:16:35.125204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.874 Running I/O for 10 seconds... 
00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:04.133 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:10:04.134 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.393 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 [2024-11-06 12:16:35.889510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.393 [2024-11-06 12:16:35.889557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.393 [2024-11-06 12:16:35.889572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.393 [2024-11-06 12:16:35.889582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.393 [2024-11-06 12:16:35.889594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.393 [2024-11-06 12:16:35.889604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.889615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.394 [2024-11-06 12:16:35.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.889635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cfa40 is same with the state(6) to be set 00:10:04.394 [2024-11-06 12:16:35.889946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.889962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.889981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.889992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 
12:16:35.890063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 
[2024-11-06 12:16:35.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.394 [2024-11-06 12:16:35.890819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.394 [2024-11-06 12:16:35.890831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.890978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.890990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 
12:16:35.891387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.395 [2024-11-06 12:16:35.891442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.395 [2024-11-06 12:16:35.891453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e8cd0 is same with the state(6) to be set 00:10:04.395 [2024-11-06 12:16:35.892913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.395 task offset: 89984 on job bdev=Nvme0n1 fails 00:10:04.395 00:10:04.395 Latency(us) 
00:10:04.395 [2024-11-06T11:16:36.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.395 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:04.395 Job: Nvme0n1 ended in about 0.43 seconds with error 00:10:04.395 Verification LBA range: start 0x0 length 0x400 00:10:04.395 Nvme0n1 : 0.43 1484.98 92.81 148.50 0.00 37737.53 4915.20 34555.35 00:10:04.395 [2024-11-06T11:16:36.010Z] =================================================================================================================== 00:10:04.395 [2024-11-06T11:16:36.010Z] Total : 1484.98 92.81 148.50 0.00 37737.53 4915.20 34555.35 00:10:04.395 [2024-11-06 12:16:35.896063] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.395 [2024-11-06 12:16:35.896093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cfa40 (9): Bad file descriptor 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.395 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:04.395 [2024-11-06 12:16:35.906728] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 9065 00:10:05.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (9065) - No such process 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:05.331 { 00:10:05.331 "params": { 00:10:05.331 "name": "Nvme$subsystem", 00:10:05.331 "trtype": "$TEST_TRANSPORT", 00:10:05.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.331 "adrfam": "ipv4", 00:10:05.331 "trsvcid": "$NVMF_PORT", 00:10:05.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.331 "hdgst": ${hdgst:-false}, 00:10:05.331 "ddgst": ${ddgst:-false} 00:10:05.331 }, 00:10:05.331 "method": "bdev_nvme_attach_controller" 00:10:05.331 } 00:10:05.331 EOF 00:10:05.331 )") 00:10:05.331 12:16:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:05.331 12:16:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:05.331 "params": { 00:10:05.331 "name": "Nvme0", 00:10:05.331 "trtype": "tcp", 00:10:05.331 "traddr": "10.0.0.2", 00:10:05.331 "adrfam": "ipv4", 00:10:05.331 "trsvcid": "4420", 00:10:05.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:05.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:05.331 "hdgst": false, 00:10:05.331 "ddgst": false 00:10:05.331 }, 00:10:05.331 "method": "bdev_nvme_attach_controller" 00:10:05.331 }' 00:10:05.590 [2024-11-06 12:16:36.966996] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:10:05.590 [2024-11-06 12:16:36.967058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9357 ] 00:10:05.590 [2024-11-06 12:16:37.063368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.590 [2024-11-06 12:16:37.109719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.849 Running I/O for 1 seconds... 
00:10:06.785 1600.00 IOPS, 100.00 MiB/s 00:10:06.785 Latency(us) 00:10:06.785 [2024-11-06T11:16:38.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.785 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:06.785 Verification LBA range: start 0x0 length 0x400 00:10:06.785 Nvme0n1 : 1.04 1607.34 100.46 0.00 0.00 38978.46 6196.13 34078.72 00:10:06.785 [2024-11-06T11:16:38.400Z] =================================================================================================================== 00:10:06.785 [2024-11-06T11:16:38.400Z] Total : 1607.34 100.46 0.00 0.00 38978.46 6196.13 34078.72 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.044 12:16:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.044 rmmod nvme_tcp 00:10:07.044 rmmod nvme_fabrics 00:10:07.044 rmmod nvme_keyring 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 8999 ']' 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 8999 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 8999 ']' 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 8999 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 8999 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 8999' 00:10:07.044 killing process with pid 8999 00:10:07.044 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 8999 00:10:07.044 12:16:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 8999 00:10:07.303 [2024-11-06 12:16:38.788291] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.303 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.837 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:09.838 00:10:09.838 real 0m12.437s 00:10:09.838 user 0m20.851s 
00:10:09.838 sys 0m5.484s 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.838 ************************************ 00:10:09.838 END TEST nvmf_host_management 00:10:09.838 ************************************ 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.838 ************************************ 00:10:09.838 START TEST nvmf_lvol 00:10:09.838 ************************************ 00:10:09.838 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.838 * Looking for test storage... 
00:10:09.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.838 12:16:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:09.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.838 --rc genhtml_branch_coverage=1 00:10:09.838 --rc genhtml_function_coverage=1 00:10:09.838 --rc genhtml_legend=1 00:10:09.838 --rc geninfo_all_blocks=1 00:10:09.838 --rc geninfo_unexecuted_blocks=1 
00:10:09.838 00:10:09.838 ' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:09.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.838 --rc genhtml_branch_coverage=1 00:10:09.838 --rc genhtml_function_coverage=1 00:10:09.838 --rc genhtml_legend=1 00:10:09.838 --rc geninfo_all_blocks=1 00:10:09.838 --rc geninfo_unexecuted_blocks=1 00:10:09.838 00:10:09.838 ' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:09.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.838 --rc genhtml_branch_coverage=1 00:10:09.838 --rc genhtml_function_coverage=1 00:10:09.838 --rc genhtml_legend=1 00:10:09.838 --rc geninfo_all_blocks=1 00:10:09.838 --rc geninfo_unexecuted_blocks=1 00:10:09.838 00:10:09.838 ' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:09.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.838 --rc genhtml_branch_coverage=1 00:10:09.838 --rc genhtml_function_coverage=1 00:10:09.838 --rc genhtml_legend=1 00:10:09.838 --rc geninfo_all_blocks=1 00:10:09.838 --rc geninfo_unexecuted_blocks=1 00:10:09.838 00:10:09.838 ' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.838 12:16:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.838 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.839 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.111 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:15.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:15.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.112 
12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:15.112 Found net devices under 0000:af:00.0: cvl_0_0 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.112 12:16:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:15.112 Found net devices under 0000:af:00.1: cvl_0_1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:10:15.112 00:10:15.112 --- 10.0.0.2 ping statistics --- 00:10:15.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.112 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:10:15.112 00:10:15.112 --- 10.0.0.1 ping statistics --- 00:10:15.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.112 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=13352 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 13352 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 13352 ']' 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.112 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:15.113 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:15.113 [2024-11-06 12:16:46.592116] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:10:15.113 [2024-11-06 12:16:46.592171] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.113 [2024-11-06 12:16:46.691177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.371 [2024-11-06 12:16:46.741168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.371 [2024-11-06 12:16:46.741205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.371 [2024-11-06 12:16:46.741216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.371 [2024-11-06 12:16:46.741228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.371 [2024-11-06 12:16:46.741236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.371 [2024-11-06 12:16:46.742848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.371 [2024-11-06 12:16:46.742966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.371 [2024-11-06 12:16:46.742970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.371 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:15.630 [2024-11-06 12:16:47.033654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.630 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.888 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:15.888 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.147 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:16.147 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:16.405 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:16.663 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ef21d148-f6ea-4d1c-8d14-00d80639abfb 00:10:16.664 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef21d148-f6ea-4d1c-8d14-00d80639abfb lvol 20 00:10:16.922 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=023e00de-0d63-4a59-bcd4-649da0843673 00:10:16.922 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:17.180 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 023e00de-0d63-4a59-bcd4-649da0843673 00:10:17.439 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:17.697 [2024-11-06 12:16:49.185686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.697 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.955 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=13911 00:10:17.955 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@44 -- # sleep 1
00:10:17.955 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:10:18.890 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 023e00de-0d63-4a59-bcd4-649da0843673 MY_SNAPSHOT
00:10:19.457 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6c1365aa-45e2-4457-95a9-5b63a5fed370
00:10:19.457 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 023e00de-0d63-4a59-bcd4-649da0843673 30
00:10:19.715 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6c1365aa-45e2-4457-95a9-5b63a5fed370 MY_CLONE
00:10:19.973 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=748bf226-f6d6-45c4-826a-2d5a7d2d1a79
00:10:19.973 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 748bf226-f6d6-45c4-826a-2d5a7d2d1a79
00:10:20.907 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 13911
00:10:29.024 Initializing NVMe Controllers
00:10:29.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:10:29.024 Controller IO queue size 128, less than required.
00:10:29.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:29.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:10:29.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:10:29.024 Initialization complete. Launching workers.
00:10:29.024 ========================================================
00:10:29.024 Latency(us)
00:10:29.024 Device Information : IOPS MiB/s Average min max
00:10:29.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13054.20 50.99 9806.39 607.90 65977.95
00:10:29.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8440.40 32.97 15170.73 1669.80 81971.41
00:10:29.024 ========================================================
00:10:29.024 Total : 21494.60 83.96 11912.83 607.90 81971.41
00:10:29.024
00:10:29.024 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 023e00de-0d63-4a59-bcd4-649da0843673
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef21d148-f6ea-4d1c-8d14-00d80639abfb
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol --
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.024 rmmod nvme_tcp 00:10:29.024 rmmod nvme_fabrics 00:10:29.024 rmmod nvme_keyring 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 13352 ']' 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 13352 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 13352 ']' 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 13352 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.024 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 13352 00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 13352' 00:10:29.283 killing process with pid 13352 00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@971 -- # kill 13352
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 13352
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:29.283 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:31.818
00:10:31.818 real 0m21.999s
00:10:31.818 user 1m5.678s
00:10:31.818 sys 0m7.110s
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:31.818 ************************************
00:10:31.818 END TEST nvmf_lvol
************************************
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:31.818 12:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:31.818 ************************************
00:10:31.818 START TEST nvmf_lvs_grow
00:10:31.818 ************************************
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:10:31.818 * Looking for test storage...
00:10:31.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow --
scripts/common.sh@336 -- # read -ra ver1 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.818 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:31.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.819 --rc genhtml_branch_coverage=1 00:10:31.819 --rc genhtml_function_coverage=1 00:10:31.819 --rc genhtml_legend=1 00:10:31.819 --rc geninfo_all_blocks=1 00:10:31.819 --rc geninfo_unexecuted_blocks=1 00:10:31.819 00:10:31.819 ' 
00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:31.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.819 --rc genhtml_branch_coverage=1 00:10:31.819 --rc genhtml_function_coverage=1 00:10:31.819 --rc genhtml_legend=1 00:10:31.819 --rc geninfo_all_blocks=1 00:10:31.819 --rc geninfo_unexecuted_blocks=1 00:10:31.819 00:10:31.819 ' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:31.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.819 --rc genhtml_branch_coverage=1 00:10:31.819 --rc genhtml_function_coverage=1 00:10:31.819 --rc genhtml_legend=1 00:10:31.819 --rc geninfo_all_blocks=1 00:10:31.819 --rc geninfo_unexecuted_blocks=1 00:10:31.819 00:10:31.819 ' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:31.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.819 --rc genhtml_branch_coverage=1 00:10:31.819 --rc genhtml_function_coverage=1 00:10:31.819 --rc genhtml_legend=1 00:10:31.819 --rc geninfo_all_blocks=1 00:10:31.819 --rc geninfo_unexecuted_blocks=1 00:10:31.819 00:10:31.819 ' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.819 12:17:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.819 
12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.819 12:17:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.819 
12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.819 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:37.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:37.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.149 
12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:37.149 Found net devices under 0000:af:00.0: cvl_0_0 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:37.149 Found net devices under 0000:af:00.1: cvl_0_1 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.149 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.150 12:17:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:10:37.150 00:10:37.150 --- 10.0.0.2 ping statistics --- 00:10:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.150 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:10:37.150 00:10:37.150 --- 10.0.0.1 ping statistics --- 00:10:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.150 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=19497 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 19497 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 19497 ']' 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:37.150 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.409 [2024-11-06 12:17:08.817151] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:10:37.409 [2024-11-06 12:17:08.817207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.409 [2024-11-06 12:17:08.918340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.409 [2024-11-06 12:17:08.968129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.409 [2024-11-06 12:17:08.968164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.409 [2024-11-06 12:17:08.968175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.409 [2024-11-06 12:17:08.968184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.410 [2024-11-06 12:17:08.968191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.410 [2024-11-06 12:17:08.968899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.347 [2024-11-06 12:17:09.901971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.347 ************************************ 00:10:38.347 START TEST lvs_grow_clean 00:10:38.347 ************************************ 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
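[Editor's note] The lvstore numbers reported by the `bdev_lvol_get_lvstores` calls below (49 data clusters on the 200M AIO file, 99 after the truncate to 400M and `bdev_lvol_grow_lvstore`, 61 free clusters once the 150M lvol is allocated) follow from simple cluster arithmetic given the `--cluster-sz 4194304` (4 MiB) setting. A minimal sketch of that accounting; the one-cluster metadata overhead is inferred from the logged counts, not taken from SPDK source:

```python
import math

CLUSTER_MB = 4      # --cluster-sz 4194304 passed to bdev_lvol_create_lvstore
META_CLUSTERS = 1   # assumption: overhead inferred from 200M -> 49 clusters

def data_clusters(aio_size_mb: int) -> int:
    """Total data clusters an lvstore exposes on an AIO bdev of this size."""
    return aio_size_mb // CLUSTER_MB - META_CLUSTERS

def free_clusters(aio_size_mb: int, lvol_size_mb: int) -> int:
    """Clusters left after carving out one lvol of the given size."""
    used = math.ceil(lvol_size_mb / CLUSTER_MB)
    return data_clusters(aio_size_mb) - used

print(data_clusters(200))        # matches total_data_clusters on the 200M file
print(data_clusters(400))        # matches total_data_clusters after the grow
print(free_clusters(400, 150))   # matches free_clusters with the 150M lvol
```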
aio_bdev lvs lvol 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:38.347 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:38.915 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:38.915 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:39.173 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=647eca65-8596-45d9-a9e6-a40294c9140f 00:10:39.173 12:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:39.173 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:39.432 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:39.432 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:39.432 12:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 647eca65-8596-45d9-a9e6-a40294c9140f lvol 150 00:10:39.691 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=23f4adc6-97cc-40f7-99dd-183679d3925b 00:10:39.692 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:39.692 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:39.951 [2024-11-06 12:17:11.334392] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:39.951 [2024-11-06 12:17:11.334456] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:39.951 true 00:10:39.951 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:39.951 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:40.209 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:40.209 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:40.467 12:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23f4adc6-97cc-40f7-99dd-183679d3925b 00:10:40.725 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:40.984 [2024-11-06 12:17:12.397670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.984 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=20327 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:41.243 12:17:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 20327 /var/tmp/bdevperf.sock 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 20327 ']' 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:41.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.243 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:41.243 [2024-11-06 12:17:12.737823] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:10:41.243 [2024-11-06 12:17:12.737882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20327 ] 00:10:41.243 [2024-11-06 12:17:12.803413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.243 [2024-11-06 12:17:12.840351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.502 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.502 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:41.502 12:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:41.760 Nvme0n1 00:10:41.760 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:42.019 [ 00:10:42.019 { 00:10:42.019 "name": "Nvme0n1", 00:10:42.019 "aliases": [ 00:10:42.019 "23f4adc6-97cc-40f7-99dd-183679d3925b" 00:10:42.019 ], 00:10:42.019 "product_name": "NVMe disk", 00:10:42.019 "block_size": 4096, 00:10:42.019 "num_blocks": 38912, 00:10:42.019 "uuid": "23f4adc6-97cc-40f7-99dd-183679d3925b", 00:10:42.019 "numa_id": 1, 00:10:42.019 "assigned_rate_limits": { 00:10:42.019 "rw_ios_per_sec": 0, 00:10:42.019 "rw_mbytes_per_sec": 0, 00:10:42.019 "r_mbytes_per_sec": 0, 00:10:42.019 "w_mbytes_per_sec": 0 00:10:42.019 }, 00:10:42.019 "claimed": false, 00:10:42.019 "zoned": false, 00:10:42.019 "supported_io_types": { 00:10:42.019 "read": true, 
00:10:42.019 "write": true, 00:10:42.019 "unmap": true, 00:10:42.019 "flush": true, 00:10:42.019 "reset": true, 00:10:42.019 "nvme_admin": true, 00:10:42.019 "nvme_io": true, 00:10:42.019 "nvme_io_md": false, 00:10:42.019 "write_zeroes": true, 00:10:42.019 "zcopy": false, 00:10:42.019 "get_zone_info": false, 00:10:42.019 "zone_management": false, 00:10:42.019 "zone_append": false, 00:10:42.019 "compare": true, 00:10:42.019 "compare_and_write": true, 00:10:42.019 "abort": true, 00:10:42.019 "seek_hole": false, 00:10:42.019 "seek_data": false, 00:10:42.019 "copy": true, 00:10:42.019 "nvme_iov_md": false 00:10:42.019 }, 00:10:42.019 "memory_domains": [ 00:10:42.019 { 00:10:42.019 "dma_device_id": "system", 00:10:42.019 "dma_device_type": 1 00:10:42.019 } 00:10:42.019 ], 00:10:42.019 "driver_specific": { 00:10:42.019 "nvme": [ 00:10:42.019 { 00:10:42.019 "trid": { 00:10:42.019 "trtype": "TCP", 00:10:42.019 "adrfam": "IPv4", 00:10:42.019 "traddr": "10.0.0.2", 00:10:42.019 "trsvcid": "4420", 00:10:42.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:42.019 }, 00:10:42.019 "ctrlr_data": { 00:10:42.019 "cntlid": 1, 00:10:42.019 "vendor_id": "0x8086", 00:10:42.019 "model_number": "SPDK bdev Controller", 00:10:42.019 "serial_number": "SPDK0", 00:10:42.019 "firmware_revision": "25.01", 00:10:42.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.019 "oacs": { 00:10:42.019 "security": 0, 00:10:42.019 "format": 0, 00:10:42.019 "firmware": 0, 00:10:42.019 "ns_manage": 0 00:10:42.019 }, 00:10:42.019 "multi_ctrlr": true, 00:10:42.019 "ana_reporting": false 00:10:42.019 }, 00:10:42.019 "vs": { 00:10:42.019 "nvme_version": "1.3" 00:10:42.019 }, 00:10:42.019 "ns_data": { 00:10:42.019 "id": 1, 00:10:42.019 "can_share": true 00:10:42.019 } 00:10:42.019 } 00:10:42.019 ], 00:10:42.019 "mp_policy": "active_passive" 00:10:42.019 } 00:10:42.019 } 00:10:42.019 ] 00:10:42.019 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=20548 
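[Editor's note] In the bdevperf output that follows, the MiB/s column is derived directly from IOPS and the `-o 4096` I/O size; the final JSON block reports both (`"iops": 15039.349..., "mibps": 58.747...`). A quick sketch of the conversion, using the values from that JSON:

```python
def mib_per_sec(iops: float, io_size_bytes: int) -> float:
    # bdevperf throughput is IOPS times the -o block size, in MiB
    return iops * io_size_bytes / (1024 * 1024)

iops = 15039.349546059          # "iops" from the JSON results block below
print(round(mib_per_sec(iops, 4096), 2))  # agrees with the reported "mibps"
```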
00:10:42.019 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:42.019 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:42.277 Running I/O for 10 seconds... 00:10:43.212 Latency(us) 00:10:43.212 [2024-11-06T11:17:14.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.212 Nvme0n1 : 1.00 14751.00 57.62 0.00 0.00 0.00 0.00 0.00 00:10:43.212 [2024-11-06T11:17:14.827Z] =================================================================================================================== 00:10:43.212 [2024-11-06T11:17:14.827Z] Total : 14751.00 57.62 0.00 0.00 0.00 0.00 0.00 00:10:43.212 00:10:44.150 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:44.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.150 Nvme0n1 : 2.00 14843.00 57.98 0.00 0.00 0.00 0.00 0.00 00:10:44.150 [2024-11-06T11:17:15.765Z] =================================================================================================================== 00:10:44.150 [2024-11-06T11:17:15.765Z] Total : 14843.00 57.98 0.00 0.00 0.00 0.00 0.00 00:10:44.150 00:10:44.150 true 00:10:44.150 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:44.150 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:44.409 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:44.409 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:44.409 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 20548 00:10:45.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.345 Nvme0n1 : 3.00 14892.67 58.17 0.00 0.00 0.00 0.00 0.00 00:10:45.345 [2024-11-06T11:17:16.960Z] =================================================================================================================== 00:10:45.345 [2024-11-06T11:17:16.960Z] Total : 14892.67 58.17 0.00 0.00 0.00 0.00 0.00 00:10:45.345 00:10:46.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.356 Nvme0n1 : 4.00 14938.50 58.35 0.00 0.00 0.00 0.00 0.00 00:10:46.356 [2024-11-06T11:17:17.971Z] =================================================================================================================== 00:10:46.356 [2024-11-06T11:17:17.971Z] Total : 14938.50 58.35 0.00 0.00 0.00 0.00 0.00 00:10:46.356 00:10:47.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.329 Nvme0n1 : 5.00 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:10:47.329 [2024-11-06T11:17:18.944Z] =================================================================================================================== 00:10:47.329 [2024-11-06T11:17:18.944Z] Total : 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:10:47.329 00:10:48.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.263 Nvme0n1 : 6.00 14989.33 58.55 0.00 0.00 0.00 0.00 0.00 00:10:48.263 [2024-11-06T11:17:19.878Z] =================================================================================================================== 00:10:48.263 
[2024-11-06T11:17:19.878Z] Total : 14989.33 58.55 0.00 0.00 0.00 0.00 0.00 00:10:48.263 00:10:49.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.197 Nvme0n1 : 7.00 14991.86 58.56 0.00 0.00 0.00 0.00 0.00 00:10:49.197 [2024-11-06T11:17:20.812Z] =================================================================================================================== 00:10:49.197 [2024-11-06T11:17:20.812Z] Total : 14991.86 58.56 0.00 0.00 0.00 0.00 0.00 00:10:49.197 00:10:50.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.133 Nvme0n1 : 8.00 15010.00 58.63 0.00 0.00 0.00 0.00 0.00 00:10:50.133 [2024-11-06T11:17:21.748Z] =================================================================================================================== 00:10:50.133 [2024-11-06T11:17:21.748Z] Total : 15010.00 58.63 0.00 0.00 0.00 0.00 0.00 00:10:50.133 00:10:51.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.068 Nvme0n1 : 9.00 15030.67 58.71 0.00 0.00 0.00 0.00 0.00 00:10:51.068 [2024-11-06T11:17:22.683Z] =================================================================================================================== 00:10:51.068 [2024-11-06T11:17:22.683Z] Total : 15030.67 58.71 0.00 0.00 0.00 0.00 0.00 00:10:51.068 00:10:52.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.443 Nvme0n1 : 10.00 15040.10 58.75 0.00 0.00 0.00 0.00 0.00 00:10:52.443 [2024-11-06T11:17:24.058Z] =================================================================================================================== 00:10:52.443 [2024-11-06T11:17:24.058Z] Total : 15040.10 58.75 0.00 0.00 0.00 0.00 0.00 00:10:52.443 00:10:52.443 00:10:52.443 Latency(us) 00:10:52.443 [2024-11-06T11:17:24.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:52.443 Nvme0n1 : 10.01 15039.35 58.75 0.00 0.00 8507.84 5302.46 16205.27 00:10:52.443 [2024-11-06T11:17:24.058Z] =================================================================================================================== 00:10:52.443 [2024-11-06T11:17:24.058Z] Total : 15039.35 58.75 0.00 0.00 8507.84 5302.46 16205.27 00:10:52.443 { 00:10:52.443 "results": [ 00:10:52.443 { 00:10:52.443 "job": "Nvme0n1", 00:10:52.443 "core_mask": "0x2", 00:10:52.443 "workload": "randwrite", 00:10:52.443 "status": "finished", 00:10:52.443 "queue_depth": 128, 00:10:52.443 "io_size": 4096, 00:10:52.443 "runtime": 10.00901, 00:10:52.443 "iops": 15039.349546059, 00:10:52.443 "mibps": 58.74745916429297, 00:10:52.443 "io_failed": 0, 00:10:52.443 "io_timeout": 0, 00:10:52.443 "avg_latency_us": 8507.837506007601, 00:10:52.443 "min_latency_us": 5302.458181818181, 00:10:52.443 "max_latency_us": 16205.265454545455 00:10:52.443 } 00:10:52.443 ], 00:10:52.443 "core_count": 1 00:10:52.443 } 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 20327 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 20327 ']' 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 20327 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 20327 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 20327' 00:10:52.443 killing process with pid 20327 00:10:52.443 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 20327 00:10:52.443 Received shutdown signal, test time was about 10.000000 seconds 00:10:52.443 00:10:52.443 Latency(us) 00:10:52.443 [2024-11-06T11:17:24.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.444 [2024-11-06T11:17:24.059Z] =================================================================================================================== 00:10:52.444 [2024-11-06T11:17:24.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:52.444 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 20327 00:10:52.444 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:52.702 12:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:52.961 12:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:52.961 12:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:53.219 12:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:53.219 12:17:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:53.219 12:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:53.478 [2024-11-06 12:17:24.998634] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.478 12:17:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:53.478 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:53.739 request: 00:10:53.739 { 00:10:53.739 "uuid": "647eca65-8596-45d9-a9e6-a40294c9140f", 00:10:53.739 "method": "bdev_lvol_get_lvstores", 00:10:53.739 "req_id": 1 00:10:53.739 } 00:10:53.739 Got JSON-RPC error response 00:10:53.739 response: 00:10:53.739 { 00:10:53.739 "code": -19, 00:10:53.739 "message": "No such device" 00:10:53.739 } 00:10:53.739 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:53.739 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:53.739 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:53.739 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:53.739 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:53.999 aio_bdev 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 23f4adc6-97cc-40f7-99dd-183679d3925b 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=23f4adc6-97cc-40f7-99dd-183679d3925b 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:53.999 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:54.567 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 23f4adc6-97cc-40f7-99dd-183679d3925b -t 2000 00:10:54.567 [ 00:10:54.568 { 00:10:54.568 "name": "23f4adc6-97cc-40f7-99dd-183679d3925b", 00:10:54.568 "aliases": [ 00:10:54.568 "lvs/lvol" 00:10:54.568 ], 00:10:54.568 "product_name": "Logical Volume", 00:10:54.568 "block_size": 4096, 00:10:54.568 "num_blocks": 38912, 00:10:54.568 "uuid": "23f4adc6-97cc-40f7-99dd-183679d3925b", 00:10:54.568 "assigned_rate_limits": { 00:10:54.568 "rw_ios_per_sec": 0, 00:10:54.568 "rw_mbytes_per_sec": 0, 00:10:54.568 "r_mbytes_per_sec": 0, 00:10:54.568 "w_mbytes_per_sec": 0 00:10:54.568 }, 00:10:54.568 "claimed": false, 00:10:54.568 "zoned": false, 00:10:54.568 "supported_io_types": { 00:10:54.568 "read": true, 00:10:54.568 "write": true, 00:10:54.568 "unmap": true, 00:10:54.568 "flush": false, 00:10:54.568 "reset": true, 00:10:54.568 
"nvme_admin": false, 00:10:54.568 "nvme_io": false, 00:10:54.568 "nvme_io_md": false, 00:10:54.568 "write_zeroes": true, 00:10:54.568 "zcopy": false, 00:10:54.568 "get_zone_info": false, 00:10:54.568 "zone_management": false, 00:10:54.568 "zone_append": false, 00:10:54.568 "compare": false, 00:10:54.568 "compare_and_write": false, 00:10:54.568 "abort": false, 00:10:54.568 "seek_hole": true, 00:10:54.568 "seek_data": true, 00:10:54.568 "copy": false, 00:10:54.568 "nvme_iov_md": false 00:10:54.568 }, 00:10:54.568 "driver_specific": { 00:10:54.568 "lvol": { 00:10:54.568 "lvol_store_uuid": "647eca65-8596-45d9-a9e6-a40294c9140f", 00:10:54.568 "base_bdev": "aio_bdev", 00:10:54.568 "thin_provision": false, 00:10:54.568 "num_allocated_clusters": 38, 00:10:54.568 "snapshot": false, 00:10:54.568 "clone": false, 00:10:54.568 "esnap_clone": false 00:10:54.568 } 00:10:54.568 } 00:10:54.568 } 00:10:54.568 ] 00:10:54.568 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:10:54.568 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:54.568 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:54.826 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:54.826 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:54.826 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:55.394 12:17:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:55.394 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23f4adc6-97cc-40f7-99dd-183679d3925b 00:10:55.394 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 647eca65-8596-45d9-a9e6-a40294c9140f 00:10:55.962 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:55.962 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:56.220 00:10:56.220 real 0m17.630s 00:10:56.220 user 0m17.352s 00:10:56.220 sys 0m1.571s 00:10:56.220 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.220 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:56.220 ************************************ 00:10:56.221 END TEST lvs_grow_clean 00:10:56.221 ************************************ 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:56.221 ************************************ 
00:10:56.221 START TEST lvs_grow_dirty 00:10:56.221 ************************************ 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:56.221 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:56.479 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:56.479 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:56.738 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=84b81353-c328-41ad-851e-5deaee20b61e 00:10:56.738 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:10:56.738 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:56.997 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:56.997 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:56.997 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84b81353-c328-41ad-851e-5deaee20b61e lvol 150 00:10:57.256 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cd4dd34c-d2dc-479b-94a3-961e1502484d 00:10:57.256 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:57.256 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:57.515 [2024-11-06 12:17:28.937091] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:57.515 [2024-11-06 12:17:28.937153] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:57.515 true 00:10:57.515 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:10:57.515 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:57.774 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:57.774 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:58.033 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cd4dd34c-d2dc-479b-94a3-961e1502484d 00:10:58.291 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:58.549 [2024-11-06 12:17:30.020401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.549 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=23551 00:10:58.808 12:17:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 23551 /var/tmp/bdevperf.sock 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 23551 ']' 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:58.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.808 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:58.808 [2024-11-06 12:17:30.374587] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:10:58.808 [2024-11-06 12:17:30.374635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23551 ] 00:10:59.066 [2024-11-06 12:17:30.428237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.066 [2024-11-06 12:17:30.469634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.066 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.066 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:59.066 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:59.633 Nvme0n1 00:10:59.633 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:59.633 [ 00:10:59.633 { 00:10:59.633 "name": "Nvme0n1", 00:10:59.633 "aliases": [ 00:10:59.633 "cd4dd34c-d2dc-479b-94a3-961e1502484d" 00:10:59.633 ], 00:10:59.633 "product_name": "NVMe disk", 00:10:59.633 "block_size": 4096, 00:10:59.633 "num_blocks": 38912, 00:10:59.633 "uuid": "cd4dd34c-d2dc-479b-94a3-961e1502484d", 00:10:59.633 "numa_id": 1, 00:10:59.633 "assigned_rate_limits": { 00:10:59.633 "rw_ios_per_sec": 0, 00:10:59.633 "rw_mbytes_per_sec": 0, 00:10:59.633 "r_mbytes_per_sec": 0, 00:10:59.633 "w_mbytes_per_sec": 0 00:10:59.633 }, 00:10:59.633 "claimed": false, 00:10:59.633 "zoned": false, 00:10:59.633 "supported_io_types": { 00:10:59.633 "read": true, 
00:10:59.633 "write": true, 00:10:59.633 "unmap": true, 00:10:59.633 "flush": true, 00:10:59.633 "reset": true, 00:10:59.633 "nvme_admin": true, 00:10:59.633 "nvme_io": true, 00:10:59.633 "nvme_io_md": false, 00:10:59.633 "write_zeroes": true, 00:10:59.633 "zcopy": false, 00:10:59.633 "get_zone_info": false, 00:10:59.633 "zone_management": false, 00:10:59.633 "zone_append": false, 00:10:59.633 "compare": true, 00:10:59.633 "compare_and_write": true, 00:10:59.633 "abort": true, 00:10:59.633 "seek_hole": false, 00:10:59.633 "seek_data": false, 00:10:59.633 "copy": true, 00:10:59.633 "nvme_iov_md": false 00:10:59.633 }, 00:10:59.633 "memory_domains": [ 00:10:59.633 { 00:10:59.633 "dma_device_id": "system", 00:10:59.633 "dma_device_type": 1 00:10:59.633 } 00:10:59.633 ], 00:10:59.633 "driver_specific": { 00:10:59.633 "nvme": [ 00:10:59.633 { 00:10:59.633 "trid": { 00:10:59.634 "trtype": "TCP", 00:10:59.634 "adrfam": "IPv4", 00:10:59.634 "traddr": "10.0.0.2", 00:10:59.634 "trsvcid": "4420", 00:10:59.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:59.634 }, 00:10:59.634 "ctrlr_data": { 00:10:59.634 "cntlid": 1, 00:10:59.634 "vendor_id": "0x8086", 00:10:59.634 "model_number": "SPDK bdev Controller", 00:10:59.634 "serial_number": "SPDK0", 00:10:59.634 "firmware_revision": "25.01", 00:10:59.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:59.634 "oacs": { 00:10:59.634 "security": 0, 00:10:59.634 "format": 0, 00:10:59.634 "firmware": 0, 00:10:59.634 "ns_manage": 0 00:10:59.634 }, 00:10:59.634 "multi_ctrlr": true, 00:10:59.634 "ana_reporting": false 00:10:59.634 }, 00:10:59.634 "vs": { 00:10:59.634 "nvme_version": "1.3" 00:10:59.634 }, 00:10:59.634 "ns_data": { 00:10:59.634 "id": 1, 00:10:59.634 "can_share": true 00:10:59.634 } 00:10:59.634 } 00:10:59.634 ], 00:10:59.634 "mp_policy": "active_passive" 00:10:59.634 } 00:10:59.634 } 00:10:59.634 ] 00:10:59.634 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=23813 
00:10:59.634 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:59.634 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:59.892 Running I/O for 10 seconds... 00:11:00.828 Latency(us) 00:11:00.828 [2024-11-06T11:17:32.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.828 Nvme0n1 : 1.00 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:11:00.828 [2024-11-06T11:17:32.443Z] =================================================================================================================== 00:11:00.828 [2024-11-06T11:17:32.443Z] Total : 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:11:00.828 00:11:01.763 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:01.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.763 Nvme0n1 : 2.00 14861.00 58.05 0.00 0.00 0.00 0.00 0.00 00:11:01.763 [2024-11-06T11:17:33.378Z] =================================================================================================================== 00:11:01.763 [2024-11-06T11:17:33.378Z] Total : 14861.00 58.05 0.00 0.00 0.00 0.00 0.00 00:11:01.763 00:11:02.021 true 00:11:02.021 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:02.021 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:11:02.280 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:02.280 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:02.280 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 23813 00:11:02.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.846 Nvme0n1 : 3.00 14902.67 58.21 0.00 0.00 0.00 0.00 0.00 00:11:02.846 [2024-11-06T11:17:34.461Z] =================================================================================================================== 00:11:02.846 [2024-11-06T11:17:34.461Z] Total : 14902.67 58.21 0.00 0.00 0.00 0.00 0.00 00:11:02.846 00:11:03.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.779 Nvme0n1 : 4.00 14941.00 58.36 0.00 0.00 0.00 0.00 0.00 00:11:03.779 [2024-11-06T11:17:35.394Z] =================================================================================================================== 00:11:03.779 [2024-11-06T11:17:35.394Z] Total : 14941.00 58.36 0.00 0.00 0.00 0.00 0.00 00:11:03.779 00:11:05.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.156 Nvme0n1 : 5.00 14963.80 58.45 0.00 0.00 0.00 0.00 0.00 00:11:05.156 [2024-11-06T11:17:36.771Z] =================================================================================================================== 00:11:05.156 [2024-11-06T11:17:36.771Z] Total : 14963.80 58.45 0.00 0.00 0.00 0.00 0.00 00:11:05.156 00:11:06.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.091 Nvme0n1 : 6.00 14988.67 58.55 0.00 0.00 0.00 0.00 0.00 00:11:06.091 [2024-11-06T11:17:37.706Z] =================================================================================================================== 00:11:06.091 
[2024-11-06T11:17:37.706Z] Total : 14988.67 58.55 0.00 0.00 0.00 0.00 0.00 00:11:06.091 00:11:07.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.026 Nvme0n1 : 7.00 15006.43 58.62 0.00 0.00 0.00 0.00 0.00 00:11:07.026 [2024-11-06T11:17:38.641Z] =================================================================================================================== 00:11:07.026 [2024-11-06T11:17:38.641Z] Total : 15006.43 58.62 0.00 0.00 0.00 0.00 0.00 00:11:07.026 00:11:07.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.963 Nvme0n1 : 8.00 15019.75 58.67 0.00 0.00 0.00 0.00 0.00 00:11:07.963 [2024-11-06T11:17:39.578Z] =================================================================================================================== 00:11:07.963 [2024-11-06T11:17:39.578Z] Total : 15019.75 58.67 0.00 0.00 0.00 0.00 0.00 00:11:07.963 00:11:08.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.899 Nvme0n1 : 9.00 15030.11 58.71 0.00 0.00 0.00 0.00 0.00 00:11:08.899 [2024-11-06T11:17:40.514Z] =================================================================================================================== 00:11:08.899 [2024-11-06T11:17:40.514Z] Total : 15030.11 58.71 0.00 0.00 0.00 0.00 0.00 00:11:08.899 00:11:09.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.834 Nvme0n1 : 10.00 15044.80 58.77 0.00 0.00 0.00 0.00 0.00 00:11:09.834 [2024-11-06T11:17:41.449Z] =================================================================================================================== 00:11:09.834 [2024-11-06T11:17:41.449Z] Total : 15044.80 58.77 0.00 0.00 0.00 0.00 0.00 00:11:09.834 00:11:09.834 00:11:09.834 Latency(us) 00:11:09.834 [2024-11-06T11:17:41.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:09.834 Nvme0n1 : 10.01 15048.76 58.78 0.00 0.00 8501.24 5302.46 18826.71 00:11:09.834 [2024-11-06T11:17:41.449Z] =================================================================================================================== 00:11:09.834 [2024-11-06T11:17:41.449Z] Total : 15048.76 58.78 0.00 0.00 8501.24 5302.46 18826.71 00:11:09.834 { 00:11:09.834 "results": [ 00:11:09.834 { 00:11:09.834 "job": "Nvme0n1", 00:11:09.834 "core_mask": "0x2", 00:11:09.834 "workload": "randwrite", 00:11:09.834 "status": "finished", 00:11:09.834 "queue_depth": 128, 00:11:09.834 "io_size": 4096, 00:11:09.834 "runtime": 10.005876, 00:11:09.834 "iops": 15048.757350181033, 00:11:09.834 "mibps": 58.78420839914466, 00:11:09.834 "io_failed": 0, 00:11:09.834 "io_timeout": 0, 00:11:09.834 "avg_latency_us": 8501.241253851875, 00:11:09.834 "min_latency_us": 5302.458181818181, 00:11:09.834 "max_latency_us": 18826.705454545456 00:11:09.834 } 00:11:09.834 ], 00:11:09.834 "core_count": 1 00:11:09.834 } 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 23551 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 23551 ']' 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 23551 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.834 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 23551 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 23551' 00:11:10.093 killing process with pid 23551 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 23551 00:11:10.093 Received shutdown signal, test time was about 10.000000 seconds 00:11:10.093 00:11:10.093 Latency(us) 00:11:10.093 [2024-11-06T11:17:41.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.093 [2024-11-06T11:17:41.708Z] =================================================================================================================== 00:11:10.093 [2024-11-06T11:17:41.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 23551 00:11:10.093 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:10.351 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:10.610 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:10.610 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:10.868 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:10.868 12:17:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:10.868 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 19497 00:11:10.868 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 19497 00:11:11.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 19497 Killed "${NVMF_APP[@]}" "$@" 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=25917 00:11:11.126 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 25917 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 25917 ']' 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- 
# local max_retries=100 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.127 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:11.127 [2024-11-06 12:17:42.592495] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:11:11.127 [2024-11-06 12:17:42.592556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.127 [2024-11-06 12:17:42.696181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.386 [2024-11-06 12:17:42.745274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.386 [2024-11-06 12:17:42.745312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.386 [2024-11-06 12:17:42.745323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.386 [2024-11-06 12:17:42.745331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.386 [2024-11-06 12:17:42.745339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.386 [2024-11-06 12:17:42.746048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.386 12:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:11.644 [2024-11-06 12:17:43.051353] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:11.644 [2024-11-06 12:17:43.051468] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:11.644 [2024-11-06 12:17:43.051508] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cd4dd34c-d2dc-479b-94a3-961e1502484d 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=cd4dd34c-d2dc-479b-94a3-961e1502484d 
00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:11.644 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:11.903 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd4dd34c-d2dc-479b-94a3-961e1502484d -t 2000 00:11:12.161 [ 00:11:12.161 { 00:11:12.161 "name": "cd4dd34c-d2dc-479b-94a3-961e1502484d", 00:11:12.161 "aliases": [ 00:11:12.161 "lvs/lvol" 00:11:12.161 ], 00:11:12.161 "product_name": "Logical Volume", 00:11:12.161 "block_size": 4096, 00:11:12.161 "num_blocks": 38912, 00:11:12.161 "uuid": "cd4dd34c-d2dc-479b-94a3-961e1502484d", 00:11:12.161 "assigned_rate_limits": { 00:11:12.161 "rw_ios_per_sec": 0, 00:11:12.161 "rw_mbytes_per_sec": 0, 00:11:12.161 "r_mbytes_per_sec": 0, 00:11:12.161 "w_mbytes_per_sec": 0 00:11:12.161 }, 00:11:12.161 "claimed": false, 00:11:12.161 "zoned": false, 00:11:12.161 "supported_io_types": { 00:11:12.161 "read": true, 00:11:12.161 "write": true, 00:11:12.161 "unmap": true, 00:11:12.161 "flush": false, 00:11:12.161 "reset": true, 00:11:12.161 "nvme_admin": false, 00:11:12.161 "nvme_io": false, 00:11:12.161 "nvme_io_md": false, 00:11:12.161 "write_zeroes": true, 00:11:12.161 "zcopy": false, 00:11:12.161 "get_zone_info": false, 00:11:12.161 "zone_management": false, 00:11:12.161 "zone_append": 
false, 00:11:12.161 "compare": false, 00:11:12.161 "compare_and_write": false, 00:11:12.161 "abort": false, 00:11:12.161 "seek_hole": true, 00:11:12.161 "seek_data": true, 00:11:12.161 "copy": false, 00:11:12.161 "nvme_iov_md": false 00:11:12.161 }, 00:11:12.161 "driver_specific": { 00:11:12.161 "lvol": { 00:11:12.161 "lvol_store_uuid": "84b81353-c328-41ad-851e-5deaee20b61e", 00:11:12.161 "base_bdev": "aio_bdev", 00:11:12.161 "thin_provision": false, 00:11:12.161 "num_allocated_clusters": 38, 00:11:12.161 "snapshot": false, 00:11:12.161 "clone": false, 00:11:12.162 "esnap_clone": false 00:11:12.162 } 00:11:12.162 } 00:11:12.162 } 00:11:12.162 ] 00:11:12.162 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:12.162 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:12.162 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:12.419 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:12.419 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:12.420 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:12.678 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:12.678 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:12.936 [2024-11-06 12:17:44.424916] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.936 12:17:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:12.936 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:13.195 request: 00:11:13.195 { 00:11:13.195 "uuid": "84b81353-c328-41ad-851e-5deaee20b61e", 00:11:13.195 "method": "bdev_lvol_get_lvstores", 00:11:13.195 "req_id": 1 00:11:13.195 } 00:11:13.195 Got JSON-RPC error response 00:11:13.195 response: 00:11:13.195 { 00:11:13.195 "code": -19, 00:11:13.195 "message": "No such device" 00:11:13.195 } 00:11:13.195 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:13.195 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:13.195 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:13.195 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:13.195 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:13.454 aio_bdev 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cd4dd34c-d2dc-479b-94a3-961e1502484d 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=cd4dd34c-d2dc-479b-94a3-961e1502484d 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.454 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:13.712 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd4dd34c-d2dc-479b-94a3-961e1502484d -t 2000 00:11:13.971 [ 00:11:13.971 { 00:11:13.971 "name": "cd4dd34c-d2dc-479b-94a3-961e1502484d", 00:11:13.971 "aliases": [ 00:11:13.971 "lvs/lvol" 00:11:13.971 ], 00:11:13.971 "product_name": "Logical Volume", 00:11:13.971 "block_size": 4096, 00:11:13.971 "num_blocks": 38912, 00:11:13.971 "uuid": "cd4dd34c-d2dc-479b-94a3-961e1502484d", 00:11:13.971 "assigned_rate_limits": { 00:11:13.971 "rw_ios_per_sec": 0, 00:11:13.971 "rw_mbytes_per_sec": 0, 00:11:13.971 "r_mbytes_per_sec": 0, 00:11:13.971 "w_mbytes_per_sec": 0 00:11:13.971 }, 00:11:13.971 "claimed": false, 00:11:13.971 "zoned": false, 00:11:13.971 "supported_io_types": { 00:11:13.971 "read": true, 00:11:13.971 "write": true, 00:11:13.971 "unmap": true, 00:11:13.971 "flush": false, 00:11:13.971 "reset": true, 00:11:13.971 "nvme_admin": false, 00:11:13.971 "nvme_io": false, 00:11:13.971 "nvme_io_md": false, 00:11:13.971 "write_zeroes": true, 00:11:13.971 "zcopy": false, 00:11:13.971 "get_zone_info": false, 00:11:13.971 "zone_management": false, 00:11:13.971 "zone_append": false, 00:11:13.971 "compare": false, 00:11:13.971 "compare_and_write": false, 
00:11:13.971 "abort": false, 00:11:13.971 "seek_hole": true, 00:11:13.971 "seek_data": true, 00:11:13.971 "copy": false, 00:11:13.971 "nvme_iov_md": false 00:11:13.971 }, 00:11:13.971 "driver_specific": { 00:11:13.971 "lvol": { 00:11:13.971 "lvol_store_uuid": "84b81353-c328-41ad-851e-5deaee20b61e", 00:11:13.971 "base_bdev": "aio_bdev", 00:11:13.971 "thin_provision": false, 00:11:13.971 "num_allocated_clusters": 38, 00:11:13.971 "snapshot": false, 00:11:13.971 "clone": false, 00:11:13.971 "esnap_clone": false 00:11:13.971 } 00:11:13.971 } 00:11:13.971 } 00:11:13.971 ] 00:11:13.971 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:13.971 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:13.971 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:14.231 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:14.231 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:14.231 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:14.490 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:14.490 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd4dd34c-d2dc-479b-94a3-961e1502484d 00:11:15.057 12:17:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84b81353-c328-41ad-851e-5deaee20b61e 00:11:15.057 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:15.625 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:15.625 00:11:15.625 real 0m19.322s 00:11:15.625 user 0m50.285s 00:11:15.625 sys 0m3.821s 00:11:15.625 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.625 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.625 ************************************ 00:11:15.625 END TEST lvs_grow_dirty 00:11:15.625 ************************************ 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:15.625 nvmf_trace.0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.625 rmmod nvme_tcp 00:11:15.625 rmmod nvme_fabrics 00:11:15.625 rmmod nvme_keyring 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 25917 ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 25917 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 25917 ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 25917 00:11:15.625 
12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 25917 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 25917' 00:11:15.625 killing process with pid 25917 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 25917 00:11:15.625 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 25917 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.885 12:17:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.885 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.860 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.860 00:11:17.860 real 0m46.412s 00:11:17.860 user 1m14.505s 00:11:17.860 sys 0m10.065s 00:11:17.860 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.860 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:17.860 ************************************ 00:11:17.860 END TEST nvmf_lvs_grow 00:11:17.860 ************************************ 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.124 ************************************ 00:11:18.124 START TEST nvmf_bdev_io_wait 00:11:18.124 ************************************ 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:18.124 * Looking for test storage... 
00:11:18.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.124 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.124 --rc genhtml_branch_coverage=1 00:11:18.124 --rc genhtml_function_coverage=1 00:11:18.124 --rc genhtml_legend=1 00:11:18.124 --rc geninfo_all_blocks=1 00:11:18.124 --rc geninfo_unexecuted_blocks=1 00:11:18.124 00:11:18.124 ' 00:11:18.124 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.124 --rc genhtml_branch_coverage=1 00:11:18.124 --rc genhtml_function_coverage=1 00:11:18.124 --rc genhtml_legend=1 00:11:18.124 --rc geninfo_all_blocks=1 00:11:18.124 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.125 12:17:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.125 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.385 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.385 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.385 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:18.385 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.386 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.657 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:23.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:23.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.657 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:23.657 Found net devices under 0000:af:00.0: cvl_0_0 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 
12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:23.657 Found net devices under 0000:af:00.1: cvl_0_1 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.657 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.657 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.658 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:11:23.916 00:11:23.916 --- 10.0.0.2 ping statistics --- 00:11:23.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.916 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:11:23.916 00:11:23.916 --- 10.0.0.1 ping statistics --- 00:11:23.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.916 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.916 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=30402 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 30402 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 30402 ']' 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.175 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.175 [2024-11-06 12:17:55.608147] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:11:24.175 [2024-11-06 12:17:55.608204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.175 [2024-11-06 12:17:55.709627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.175 [2024-11-06 12:17:55.761852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.175 [2024-11-06 12:17:55.761892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:24.175 [2024-11-06 12:17:55.761902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.175 [2024-11-06 12:17:55.761911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.175 [2024-11-06 12:17:55.761919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.175 [2024-11-06 12:17:55.763817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.175 [2024-11-06 12:17:55.763910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.175 [2024-11-06 12:17:55.764017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.175 [2024-11-06 12:17:55.764006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.434 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:24.434 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 [2024-11-06 12:17:55.947975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 Malloc0 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 
12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.435 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.435 [2024-11-06 12:17:56.004818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=30526 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=30528 
00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.435 { 00:11:24.435 "params": { 00:11:24.435 "name": "Nvme$subsystem", 00:11:24.435 "trtype": "$TEST_TRANSPORT", 00:11:24.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.435 "adrfam": "ipv4", 00:11:24.435 "trsvcid": "$NVMF_PORT", 00:11:24.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.435 "hdgst": ${hdgst:-false}, 00:11:24.435 "ddgst": ${ddgst:-false} 00:11:24.435 }, 00:11:24.435 "method": "bdev_nvme_attach_controller" 00:11:24.435 } 00:11:24.435 EOF 00:11:24.435 )") 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=30530 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=30533 00:11:24.435 12:17:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.435 { 00:11:24.435 "params": { 00:11:24.435 "name": "Nvme$subsystem", 00:11:24.435 "trtype": "$TEST_TRANSPORT", 00:11:24.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.435 "adrfam": "ipv4", 00:11:24.435 "trsvcid": "$NVMF_PORT", 00:11:24.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.435 "hdgst": ${hdgst:-false}, 00:11:24.435 "ddgst": ${ddgst:-false} 00:11:24.435 }, 00:11:24.435 "method": "bdev_nvme_attach_controller" 00:11:24.435 } 00:11:24.435 EOF 00:11:24.435 )") 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.435 { 00:11:24.435 "params": { 00:11:24.435 "name": "Nvme$subsystem", 00:11:24.435 "trtype": "$TEST_TRANSPORT", 00:11:24.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.435 "adrfam": "ipv4", 00:11:24.435 "trsvcid": "$NVMF_PORT", 00:11:24.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.435 "hdgst": ${hdgst:-false}, 00:11:24.435 "ddgst": ${ddgst:-false} 00:11:24.435 }, 00:11:24.435 "method": "bdev_nvme_attach_controller" 00:11:24.435 } 00:11:24.435 EOF 00:11:24.435 )") 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.435 { 00:11:24.435 "params": { 00:11:24.435 "name": "Nvme$subsystem", 00:11:24.435 "trtype": "$TEST_TRANSPORT", 00:11:24.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.435 "adrfam": "ipv4", 00:11:24.435 "trsvcid": "$NVMF_PORT", 00:11:24.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.435 "hdgst": ${hdgst:-false}, 00:11:24.435 "ddgst": ${ddgst:-false} 00:11:24.435 }, 00:11:24.435 "method": "bdev_nvme_attach_controller" 00:11:24.435 } 00:11:24.435 EOF 00:11:24.435 )") 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 30526 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.435 "params": { 00:11:24.435 "name": "Nvme1", 00:11:24.435 "trtype": "tcp", 00:11:24.435 "traddr": "10.0.0.2", 00:11:24.435 "adrfam": "ipv4", 00:11:24.435 "trsvcid": "4420", 00:11:24.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.435 "hdgst": false, 00:11:24.435 "ddgst": false 00:11:24.435 }, 00:11:24.435 "method": "bdev_nvme_attach_controller" 00:11:24.435 }' 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.435 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.435 "params": { 00:11:24.436 "name": "Nvme1", 00:11:24.436 "trtype": "tcp", 00:11:24.436 "traddr": "10.0.0.2", 00:11:24.436 "adrfam": "ipv4", 00:11:24.436 "trsvcid": "4420", 00:11:24.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.436 "hdgst": false, 00:11:24.436 "ddgst": false 00:11:24.436 }, 00:11:24.436 "method": "bdev_nvme_attach_controller" 00:11:24.436 }' 00:11:24.436 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.436 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.436 "params": { 00:11:24.436 "name": "Nvme1", 00:11:24.436 "trtype": "tcp", 00:11:24.436 "traddr": "10.0.0.2", 00:11:24.436 "adrfam": "ipv4", 00:11:24.436 "trsvcid": "4420", 00:11:24.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.436 "hdgst": false, 00:11:24.436 "ddgst": false 00:11:24.436 }, 00:11:24.436 "method": "bdev_nvme_attach_controller" 00:11:24.436 }' 00:11:24.436 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.436 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.436 "params": { 00:11:24.436 "name": "Nvme1", 00:11:24.436 "trtype": "tcp", 00:11:24.436 "traddr": "10.0.0.2", 00:11:24.436 "adrfam": "ipv4", 00:11:24.436 "trsvcid": "4420", 00:11:24.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.436 "hdgst": false, 00:11:24.436 "ddgst": false 00:11:24.436 }, 00:11:24.436 "method": "bdev_nvme_attach_controller" 00:11:24.436 }' 00:11:24.695 [2024-11-06 12:17:56.061302] Starting SPDK v25.01-pre git sha1 
81757caea / DPDK 24.03.0 initialization... 00:11:24.695 [2024-11-06 12:17:56.061359] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:24.695 [2024-11-06 12:17:56.062370] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:11:24.695 [2024-11-06 12:17:56.062434] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:24.695 [2024-11-06 12:17:56.063862] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:11:24.695 [2024-11-06 12:17:56.063919] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:24.695 [2024-11-06 12:17:56.064909] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:11:24.695 [2024-11-06 12:17:56.064967] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:24.695 [2024-11-06 12:17:56.242647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.695 [2024-11-06 12:17:56.284212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.695 [2024-11-06 12:17:56.304668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.954 [2024-11-06 12:17:56.344813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.954 [2024-11-06 12:17:56.432276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.954 [2024-11-06 12:17:56.489506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.954 [2024-11-06 12:17:56.501243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:24.954 [2024-11-06 12:17:56.538709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:25.213 Running I/O for 1 seconds... 00:11:25.213 Running I/O for 1 seconds... 00:11:25.213 Running I/O for 1 seconds... 00:11:25.213 Running I/O for 1 seconds... 
00:11:26.150 9060.00 IOPS, 35.39 MiB/s [2024-11-06T11:17:57.765Z] 163624.00 IOPS, 639.16 MiB/s 00:11:26.150 Latency(us) 00:11:26.150 [2024-11-06T11:17:57.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.150 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:26.150 Nvme1n1 : 1.00 163246.74 637.68 0.00 0.00 779.85 348.16 2308.65 00:11:26.150 [2024-11-06T11:17:57.765Z] =================================================================================================================== 00:11:26.150 [2024-11-06T11:17:57.765Z] Total : 163246.74 637.68 0.00 0.00 779.85 348.16 2308.65 00:11:26.150 00:11:26.150 Latency(us) 00:11:26.150 [2024-11-06T11:17:57.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.150 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:26.150 Nvme1n1 : 1.06 8666.23 33.85 0.00 0.00 14062.05 5838.66 60293.12 00:11:26.150 [2024-11-06T11:17:57.765Z] =================================================================================================================== 00:11:26.150 [2024-11-06T11:17:57.765Z] Total : 8666.23 33.85 0.00 0.00 14062.05 5838.66 60293.12 00:11:26.409 8156.00 IOPS, 31.86 MiB/s 00:11:26.409 Latency(us) 00:11:26.409 [2024-11-06T11:17:58.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.409 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:26.409 Nvme1n1 : 1.01 8255.46 32.25 0.00 0.00 15465.58 3842.79 28597.53 00:11:26.409 [2024-11-06T11:17:58.024Z] =================================================================================================================== 00:11:26.409 [2024-11-06T11:17:58.024Z] Total : 8255.46 32.25 0.00 0.00 15465.58 3842.79 28597.53 00:11:26.409 9012.00 IOPS, 35.20 MiB/s 00:11:26.409 Latency(us) 00:11:26.409 [2024-11-06T11:17:58.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.409 
Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:26.409 Nvme1n1 : 1.01 9092.22 35.52 0.00 0.00 14030.25 4408.79 23950.43 00:11:26.409 [2024-11-06T11:17:58.024Z] =================================================================================================================== 00:11:26.409 [2024-11-06T11:17:58.024Z] Total : 9092.22 35.52 0.00 0.00 14030.25 4408.79 23950.43 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 30528 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 30530 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 30533 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:11:26.409 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.409 rmmod nvme_tcp 00:11:26.409 rmmod nvme_fabrics 00:11:26.409 rmmod nvme_keyring 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 30402 ']' 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 30402 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 30402 ']' 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 30402 00:11:26.409 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:26.410 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.410 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 30402 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 30402' 00:11:26.668 killing process with pid 30402 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 30402 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@976 -- # wait 30402 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.668 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.669 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.202 00:11:29.202 real 0m10.796s 00:11:29.202 user 0m16.971s 00:11:29.202 sys 0m6.149s 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:29.202 ************************************ 00:11:29.202 END TEST nvmf_bdev_io_wait 00:11:29.202 
************************************ 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.202 ************************************ 00:11:29.202 START TEST nvmf_queue_depth 00:11:29.202 ************************************ 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:29.202 * Looking for test storage... 00:11:29.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.202 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:29.202 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.203 --rc genhtml_branch_coverage=1 00:11:29.203 --rc genhtml_function_coverage=1 00:11:29.203 --rc genhtml_legend=1 00:11:29.203 --rc geninfo_all_blocks=1 00:11:29.203 --rc 
geninfo_unexecuted_blocks=1 00:11:29.203 00:11:29.203 ' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.203 --rc genhtml_branch_coverage=1 00:11:29.203 --rc genhtml_function_coverage=1 00:11:29.203 --rc genhtml_legend=1 00:11:29.203 --rc geninfo_all_blocks=1 00:11:29.203 --rc geninfo_unexecuted_blocks=1 00:11:29.203 00:11:29.203 ' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.203 --rc genhtml_branch_coverage=1 00:11:29.203 --rc genhtml_function_coverage=1 00:11:29.203 --rc genhtml_legend=1 00:11:29.203 --rc geninfo_all_blocks=1 00:11:29.203 --rc geninfo_unexecuted_blocks=1 00:11:29.203 00:11:29.203 ' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.203 --rc genhtml_branch_coverage=1 00:11:29.203 --rc genhtml_function_coverage=1 00:11:29.203 --rc genhtml_legend=1 00:11:29.203 --rc geninfo_all_blocks=1 00:11:29.203 --rc geninfo_unexecuted_blocks=1 00:11:29.203 00:11:29.203 ' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.203 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.203 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.203 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.204 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.204 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.204 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.204 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.204 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.474 12:18:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.474 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:34.474 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:34.475 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:34.475 Found net devices under 0000:af:00.0: cvl_0_0 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:34.475 Found net devices under 0000:af:00.1: cvl_0_1 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.475 
12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.475 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:11:34.734 00:11:34.734 --- 10.0.0.2 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:11:34.734 00:11:34.734 --- 10.0.0.1 ping statistics --- 00:11:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.734 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.734 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=34680 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 34680 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 34680 ']' 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.993 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:34.993 [2024-11-06 12:18:06.446065] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:11:34.993 [2024-11-06 12:18:06.446124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.993 [2024-11-06 12:18:06.521188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.993 [2024-11-06 12:18:06.560198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.993 [2024-11-06 12:18:06.560233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:34.993 [2024-11-06 12:18:06.560239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.993 [2024-11-06 12:18:06.560245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.993 [2024-11-06 12:18:06.560250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.993 [2024-11-06 12:18:06.560825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.252 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.252 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:35.252 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.252 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.252 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 [2024-11-06 12:18:06.714101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 Malloc0 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 [2024-11-06 12:18:06.756158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.253 12:18:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=34741 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 34741 /var/tmp/bdevperf.sock 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 34741 ']' 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:35.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.253 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.253 [2024-11-06 12:18:06.787580] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:11:35.253 [2024-11-06 12:18:06.787620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34741 ] 00:11:35.253 [2024-11-06 12:18:06.869775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.511 [2024-11-06 12:18:06.920730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.078 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.078 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:36.078 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:36.078 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.337 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.337 NVMe0n1 00:11:36.337 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.337 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:36.596 Running I/O for 10 seconds... 
00:11:38.495 10143.00 IOPS, 39.62 MiB/s [2024-11-06T11:18:11.046Z] 10245.00 IOPS, 40.02 MiB/s [2024-11-06T11:18:12.422Z] 10348.67 IOPS, 40.42 MiB/s [2024-11-06T11:18:12.989Z] 10463.50 IOPS, 40.87 MiB/s [2024-11-06T11:18:14.366Z] 10445.60 IOPS, 40.80 MiB/s [2024-11-06T11:18:15.302Z] 10502.00 IOPS, 41.02 MiB/s [2024-11-06T11:18:16.237Z] 10529.14 IOPS, 41.13 MiB/s [2024-11-06T11:18:17.174Z] 10535.50 IOPS, 41.15 MiB/s [2024-11-06T11:18:18.109Z] 10575.56 IOPS, 41.31 MiB/s [2024-11-06T11:18:18.109Z] 10589.90 IOPS, 41.37 MiB/s 00:11:46.494 Latency(us) 00:11:46.494 [2024-11-06T11:18:18.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.494 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:46.494 Verification LBA range: start 0x0 length 0x4000 00:11:46.494 NVMe0n1 : 10.05 10618.60 41.48 0.00 0.00 96043.69 6702.55 66250.94 00:11:46.494 [2024-11-06T11:18:18.109Z] =================================================================================================================== 00:11:46.494 [2024-11-06T11:18:18.109Z] Total : 10618.60 41.48 0.00 0.00 96043.69 6702.55 66250.94 00:11:46.494 { 00:11:46.494 "results": [ 00:11:46.494 { 00:11:46.494 "job": "NVMe0n1", 00:11:46.494 "core_mask": "0x1", 00:11:46.494 "workload": "verify", 00:11:46.494 "status": "finished", 00:11:46.494 "verify_range": { 00:11:46.494 "start": 0, 00:11:46.494 "length": 16384 00:11:46.494 }, 00:11:46.494 "queue_depth": 1024, 00:11:46.494 "io_size": 4096, 00:11:46.494 "runtime": 10.045489, 00:11:46.494 "iops": 10618.597063816405, 00:11:46.494 "mibps": 41.47889478053283, 00:11:46.494 "io_failed": 0, 00:11:46.494 "io_timeout": 0, 00:11:46.494 "avg_latency_us": 96043.69432564118, 00:11:46.494 "min_latency_us": 6702.545454545455, 00:11:46.494 "max_latency_us": 66250.93818181819 00:11:46.494 } 00:11:46.494 ], 00:11:46.494 "core_count": 1 00:11:46.494 } 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 34741 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 34741 ']' 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 34741 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.494 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 34741 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 34741' 00:11:46.754 killing process with pid 34741 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 34741 00:11:46.754 Received shutdown signal, test time was about 10.000000 seconds 00:11:46.754 00:11:46.754 Latency(us) 00:11:46.754 [2024-11-06T11:18:18.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.754 [2024-11-06T11:18:18.369Z] =================================================================================================================== 00:11:46.754 [2024-11-06T11:18:18.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 34741 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.754 rmmod nvme_tcp 00:11:46.754 rmmod nvme_fabrics 00:11:46.754 rmmod nvme_keyring 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 34680 ']' 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 34680 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 34680 ']' 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 34680 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.754 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 34680 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 34680' 00:11:47.013 killing process with pid 34680 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 34680 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 34680 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.013 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.549 12:18:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.549 00:11:49.549 real 0m20.280s 00:11:49.549 user 0m24.846s 00:11:49.549 sys 0m5.845s 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:49.549 ************************************ 00:11:49.549 END TEST nvmf_queue_depth 00:11:49.549 ************************************ 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:49.549 ************************************ 00:11:49.549 START TEST nvmf_target_multipath 00:11:49.549 ************************************ 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:49.549 * Looking for test storage... 
00:11:49.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:49.549 12:18:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.549 --rc genhtml_branch_coverage=1 00:11:49.549 --rc genhtml_function_coverage=1 00:11:49.549 --rc genhtml_legend=1 00:11:49.549 --rc geninfo_all_blocks=1 00:11:49.549 --rc geninfo_unexecuted_blocks=1 00:11:49.549 00:11:49.549 ' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.549 --rc genhtml_branch_coverage=1 00:11:49.549 --rc genhtml_function_coverage=1 00:11:49.549 --rc genhtml_legend=1 00:11:49.549 --rc geninfo_all_blocks=1 00:11:49.549 --rc geninfo_unexecuted_blocks=1 00:11:49.549 00:11:49.549 ' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.549 --rc genhtml_branch_coverage=1 00:11:49.549 --rc genhtml_function_coverage=1 00:11:49.549 --rc genhtml_legend=1 00:11:49.549 --rc geninfo_all_blocks=1 00:11:49.549 --rc geninfo_unexecuted_blocks=1 00:11:49.549 00:11:49.549 ' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.549 --rc genhtml_branch_coverage=1 00:11:49.549 --rc genhtml_function_coverage=1 00:11:49.549 --rc genhtml_legend=1 00:11:49.549 --rc geninfo_all_blocks=1 00:11:49.549 --rc geninfo_unexecuted_blocks=1 00:11:49.549 00:11:49.549 ' 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.549 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.550 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.820 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:54.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:54.821 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:54.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:54.821 Found net devices under 0000:af:00.0: cvl_0_0 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.821 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:54.821 Found net devices under 0000:af:00.1: cvl_0_1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.821 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:11:54.822 00:11:54.822 --- 10.0.0.2 ping statistics --- 00:11:54.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.822 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:11:54.822 00:11:54.822 --- 10.0.0.1 ping statistics --- 00:11:54.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.822 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:54.822 only one NIC for nvmf test 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:54.822 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.822 rmmod nvme_tcp 00:11:54.822 rmmod nvme_fabrics 00:11:54.822 rmmod nvme_keyring 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.822 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:57.356 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.357 00:11:57.357 real 0m7.741s 00:11:57.357 user 0m1.599s 00:11:57.357 sys 0m4.034s 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:57.357 ************************************ 00:11:57.357 END TEST nvmf_target_multipath 00:11:57.357 ************************************ 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:57.357 ************************************ 00:11:57.357 START TEST nvmf_zcopy 00:11:57.357 ************************************ 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:57.357 * Looking for test storage... 00:11:57.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.357 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:57.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.357 --rc genhtml_branch_coverage=1 00:11:57.357 --rc genhtml_function_coverage=1 00:11:57.357 --rc genhtml_legend=1 00:11:57.357 --rc geninfo_all_blocks=1 00:11:57.357 --rc geninfo_unexecuted_blocks=1 00:11:57.357 00:11:57.357 ' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:57.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.357 --rc genhtml_branch_coverage=1 00:11:57.357 --rc genhtml_function_coverage=1 00:11:57.357 --rc genhtml_legend=1 00:11:57.357 --rc geninfo_all_blocks=1 00:11:57.357 --rc geninfo_unexecuted_blocks=1 00:11:57.357 00:11:57.357 ' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:57.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.357 --rc genhtml_branch_coverage=1 00:11:57.357 --rc genhtml_function_coverage=1 00:11:57.357 --rc genhtml_legend=1 00:11:57.357 --rc geninfo_all_blocks=1 00:11:57.357 --rc geninfo_unexecuted_blocks=1 00:11:57.357 00:11:57.357 ' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:57.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.357 --rc genhtml_branch_coverage=1 00:11:57.357 --rc 
genhtml_function_coverage=1 00:11:57.357 --rc genhtml_legend=1 00:11:57.357 --rc geninfo_all_blocks=1 00:11:57.357 --rc geninfo_unexecuted_blocks=1 00:11:57.357 00:11:57.357 ' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.357 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.357 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.358 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.358 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.804 12:18:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.804 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.805 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.805 12:18:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:02.805 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.805 12:18:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:12:02.805 00:12:02.805 --- 10.0.0.2 ping statistics --- 00:12:02.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.805 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:12:02.805 00:12:02.805 --- 10.0.0.1 ping statistics --- 00:12:02.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.805 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.805 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=44483 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 44483 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 44483 ']' 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.118 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.118 [2024-11-06 12:18:34.499685] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:12:03.118 [2024-11-06 12:18:34.499751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.118 [2024-11-06 12:18:34.578526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.118 [2024-11-06 12:18:34.617815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.118 [2024-11-06 12:18:34.617846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:03.118 [2024-11-06 12:18:34.617852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.118 [2024-11-06 12:18:34.617858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.118 [2024-11-06 12:18:34.617863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.118 [2024-11-06 12:18:34.618307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 [2024-11-06 12:18:34.763829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 [2024-11-06 12:18:34.780044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 malloc0 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:03.430 { 00:12:03.430 "params": { 00:12:03.430 "name": "Nvme$subsystem", 00:12:03.430 "trtype": "$TEST_TRANSPORT", 00:12:03.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.430 "adrfam": "ipv4", 00:12:03.430 "trsvcid": "$NVMF_PORT", 00:12:03.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.430 "hdgst": ${hdgst:-false}, 00:12:03.430 "ddgst": ${ddgst:-false} 00:12:03.430 }, 00:12:03.430 "method": "bdev_nvme_attach_controller" 00:12:03.430 } 00:12:03.430 EOF 00:12:03.430 )") 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:03.430 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:03.430 "params": { 00:12:03.430 "name": "Nvme1", 00:12:03.430 "trtype": "tcp", 00:12:03.430 "traddr": "10.0.0.2", 00:12:03.430 "adrfam": "ipv4", 00:12:03.430 "trsvcid": "4420", 00:12:03.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.430 "hdgst": false, 00:12:03.430 "ddgst": false 00:12:03.430 }, 00:12:03.430 "method": "bdev_nvme_attach_controller" 00:12:03.430 }' 00:12:03.430 [2024-11-06 12:18:34.864040] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:12:03.430 [2024-11-06 12:18:34.864098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44662 ] 00:12:03.430 [2024-11-06 12:18:34.956502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.430 [2024-11-06 12:18:35.006550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.721 Running I/O for 10 seconds... 
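The `gen_nvmf_target_json` steps traced above (the `config=()` loop, the heredoc stanza, the `IFS=,` join, and the resulting JSON that bdevperf receives via `--json /dev/fd/62`) can be sketched as a standalone script. This is a minimal reconstruction based only on what the xtrace output shows; variable values mirror the logged defaults (`tcp`, `10.0.0.2`, `4420`), and the real helper lives in `nvmf/common.sh`, which may do additional wrapping before handing the config to bdevperf.

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern visible in the trace above:
# one bdev_nvme_attach_controller stanza per subsystem id, joined with
# commas. Values are the test defaults as logged; hdgst/ddgst fall back
# to false when the caller has not set them.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
	local subsystem
	local config=()
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Join the per-subsystem stanzas with commas, as the trace shows
	# (IFS=, then printf '%s\n' "${config[*]}").
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

With no arguments the `"${@:-1}"` default produces a single `Nvme1` stanza, which matches the rendered config printed in the log; passing several ids (e.g. `gen_nvmf_target_json 1 2`) would emit one comma-joined stanza per controller.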
00:12:06.034 8315.00 IOPS, 64.96 MiB/s [2024-11-06T11:18:38.586Z] 8377.50 IOPS, 65.45 MiB/s [2024-11-06T11:18:39.522Z] 8393.00 IOPS, 65.57 MiB/s [2024-11-06T11:18:40.461Z] 8403.50 IOPS, 65.65 MiB/s [2024-11-06T11:18:41.397Z] 8409.60 IOPS, 65.70 MiB/s [2024-11-06T11:18:42.334Z] 8413.33 IOPS, 65.73 MiB/s [2024-11-06T11:18:43.269Z] 8417.14 IOPS, 65.76 MiB/s [2024-11-06T11:18:44.646Z] 8419.00 IOPS, 65.77 MiB/s [2024-11-06T11:18:45.581Z] 8419.44 IOPS, 65.78 MiB/s [2024-11-06T11:18:45.581Z] 8424.60 IOPS, 65.82 MiB/s 00:12:13.966 Latency(us) 00:12:13.966 [2024-11-06T11:18:45.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.966 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:13.966 Verification LBA range: start 0x0 length 0x1000 00:12:13.966 Nvme1n1 : 10.01 8425.13 65.82 0.00 0.00 15131.04 1295.83 22401.40 00:12:13.966 [2024-11-06T11:18:45.581Z] =================================================================================================================== 00:12:13.966 [2024-11-06T11:18:45.581Z] Total : 8425.13 65.82 0.00 0.00 15131.04 1295.83 22401.40 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=46504 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:13.966 12:18:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:13.966 { 00:12:13.966 "params": { 00:12:13.966 "name": "Nvme$subsystem", 00:12:13.966 "trtype": "$TEST_TRANSPORT", 00:12:13.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:13.966 "adrfam": "ipv4", 00:12:13.966 "trsvcid": "$NVMF_PORT", 00:12:13.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:13.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:13.966 "hdgst": ${hdgst:-false}, 00:12:13.966 "ddgst": ${ddgst:-false} 00:12:13.966 }, 00:12:13.966 "method": "bdev_nvme_attach_controller" 00:12:13.966 } 00:12:13.966 EOF 00:12:13.966 )") 00:12:13.966 [2024-11-06 12:18:45.433381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.966 [2024-11-06 12:18:45.433415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:13.966 [2024-11-06 12:18:45.441369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.966 [2024-11-06 12:18:45.441382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:13.966 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:13.966 "params": { 00:12:13.967 "name": "Nvme1", 00:12:13.967 "trtype": "tcp", 00:12:13.967 "traddr": "10.0.0.2", 00:12:13.967 "adrfam": "ipv4", 00:12:13.967 "trsvcid": "4420", 00:12:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:13.967 "hdgst": false, 00:12:13.967 "ddgst": false 00:12:13.967 }, 00:12:13.967 "method": "bdev_nvme_attach_controller" 00:12:13.967 }' 00:12:13.967 [2024-11-06 12:18:45.449386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.449396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.454725] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:12:13.967 [2024-11-06 12:18:45.454767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46504 ] 00:12:13.967 [2024-11-06 12:18:45.457407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.457416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.465428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.465439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.473449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.473462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.481475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.481485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.489493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.489502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.497512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.497521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.505533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.505541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:12:13.967 [2024-11-06 12:18:45.513553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.513562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.521584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.521599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.529597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.529606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.536973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.967 [2024-11-06 12:18:45.537618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.537639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.545641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.545652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.553661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.553673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.561681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.561690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.569702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.569711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.967 [2024-11-06 12:18:45.577734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.967 [2024-11-06 12:18:45.577744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.585748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.585761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.586044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.226 [2024-11-06 12:18:45.593768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.593780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.601799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.601816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.609815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.609832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.617836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.617850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.625856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.625868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.633877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:14.226 [2024-11-06 12:18:45.633891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.641896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.641904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.649918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.649929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.657939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.657948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.665959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.665968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.673982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.673990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.682018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.682036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.690030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.690042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.698048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 
12:18:45.698065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.706070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.706082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.226 [2024-11-06 12:18:45.714092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.226 [2024-11-06 12:18:45.714101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.722112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.722121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.730134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.730143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.738155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.738164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.746180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.746192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.754203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.754215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.762228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.762242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.770246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.770255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.778275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.778292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.786291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.786302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 Running I/O for 5 seconds... 00:12:14.227 [2024-11-06 12:18:45.794311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.794320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.804633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.804651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.813286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.813304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.822523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.822541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.831205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.831222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:14.227 [2024-11-06 12:18:45.840411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.227 [2024-11-06 12:18:45.840428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.849264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.849281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.857904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.857922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.866504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.866522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.875526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.875544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.884517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.884535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.893266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.893283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.902252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.902270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 
12:18:45.911244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.911263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.920318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.486 [2024-11-06 12:18:45.920336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.486 [2024-11-06 12:18:45.929424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.929443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.938625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.938644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.947969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.947987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.957264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.957283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.965959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.965978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.974999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.975019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.984117] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.984136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:45.993212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:45.993230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.002422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.002442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.011607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.011625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.020564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.020583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.029221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.029239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.037841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.037859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.046482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.046500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.055417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.055434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.064762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.064780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.073209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.073225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.082399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.082416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.091113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.091131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.487 [2024-11-06 12:18:46.099796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.487 [2024-11-06 12:18:46.099815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.108415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.108433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.116842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.116860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.125812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 
[2024-11-06 12:18:46.125830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.134515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.134532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.143855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.143873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.153112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.153130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.161875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.161892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.171070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.171088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.180203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.180221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.189192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.189211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.198110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.198128] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.206614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.206632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.215573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.215591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.224583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.224602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.233611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.233628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.242772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.242791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.251817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.251835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.260324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.260341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.746 [2024-11-06 12:18:46.269177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.269194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:14.746 [2024-11-06 12:18:46.278428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.746 [2024-11-06 12:18:46.278446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical error pair "Requested NSID 1 already in use" / "Unable to add namespace" repeats roughly every 9 ms from 12:18:46.278 through 12:18:47.742 (timestamps 00:12:14.747 to 00:12:16.303); only the first and last occurrences are shown ...]
00:12:15.265 18323.00 IOPS, 143.15 MiB/s [2024-11-06T11:18:46.880Z]
00:12:16.303 [2024-11-06 12:18:47.742277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.742295] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.751560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.751578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.760272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.760289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.769822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.769840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.779142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.779159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.788314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.788332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.797327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.797345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 18371.50 IOPS, 143.53 MiB/s [2024-11-06T11:18:47.918Z] [2024-11-06 12:18:47.805810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.805828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.814976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.814993] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.824061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.824079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.833232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.833250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.842271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.842290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.851412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.851430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.860108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.860125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.869074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.869092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.877744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.877761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.886998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.887020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:16.303 [2024-11-06 12:18:47.896213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.896230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.905218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.905234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.303 [2024-11-06 12:18:47.914354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.303 [2024-11-06 12:18:47.914370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.922935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.922952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.932412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.932429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.941131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.941149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.950221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.950239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.958904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.958922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.967417] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.967435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.975948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.975966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.984994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.985012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:47.994501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:47.994519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:48.003311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:48.003328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:48.012271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:48.012288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:48.021333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:48.021351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:48.030019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.562 [2024-11-06 12:18:48.030036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.562 [2024-11-06 12:18:48.038769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.038787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.047913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.047930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.056548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.056565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.065732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.065750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.074622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.074639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.083666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.083684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.092589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.092606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.101985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.102003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.110114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 
[2024-11-06 12:18:48.110131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.119662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.119680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.128453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.128476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.137555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.137572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.146696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.146713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.155882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.155900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.164614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.164631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.563 [2024-11-06 12:18:48.173469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.563 [2024-11-06 12:18:48.173502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.182532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.182550] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.191720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.191738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.200449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.200472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.209416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.209434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.218381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.218399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.227596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.227614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.236294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.236311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.244982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.244999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.253985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.254003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:16.822 [2024-11-06 12:18:48.262731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.262748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.271727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.271744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.281063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.281080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.290265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.290282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.299591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.299609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.308666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.308683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.317785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.317802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.326998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.327015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.336068] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.336086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.345194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.345211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.354229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.354247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.363588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.363606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.372619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.372636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.381584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.381602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.390450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.390472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.399051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.399069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.407969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.407986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.417102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.417120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.425692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.425710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.822 [2024-11-06 12:18:48.434759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.822 [2024-11-06 12:18:48.434777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.081 [2024-11-06 12:18:48.443471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.081 [2024-11-06 12:18:48.443489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.081 [2024-11-06 12:18:48.452513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.081 [2024-11-06 12:18:48.452531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.081 [2024-11-06 12:18:48.461593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.461611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.470816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.470834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.479898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 
[2024-11-06 12:18:48.479915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.488836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.488853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.497943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.497961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.506985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.507002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.516030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.516047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.525082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.525099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.534030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.534047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.543275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.543293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.552416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.552433] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.561640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.561665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.571099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.571116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.580090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.580108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.589144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.589161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.598057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.598074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.607226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.607243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.616353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.616371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.625481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.625498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:17.082 [2024-11-06 12:18:48.634177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.634194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.643175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.643193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.651765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.651783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.660352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.660371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.669424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.669442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.678226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.678243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.687296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.687313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.082 [2024-11-06 12:18:48.696407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.082 [2024-11-06 12:18:48.696425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.705553] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.705570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.714692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.714709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.723934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.723951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.733138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.733160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.742302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.742319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.751521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.751538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.760756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.760773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.769402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.769419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.341 [2024-11-06 12:18:48.778557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:17.341 [2024-11-06 12:18:48.778575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
18406.33 IOPS, 143.80 MiB/s [2024-11-06T11:18:48.956Z]
18429.00 IOPS, 143.98 MiB/s [2024-11-06T11:18:49.994Z]
[identical error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 9 ms intervals from 2024-11-06 12:18:48.787785 through 12:18:50.241642; repeated occurrences elided]
add namespace 00:12:18.639 [2024-11-06 12:18:50.251003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.639 [2024-11-06 12:18:50.251020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.259705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.259723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.268247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.268264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.277361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.277379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.286487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.286505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.295380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.295398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.304638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.304655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.313308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.313327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.322387] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.322404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.331496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.331514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.340657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.340676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.349547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.349566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.358644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.358663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.367773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.367792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.376919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.376937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.385939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.385958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.395190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.395213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.404432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.404450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.412654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.412672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.421787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.421806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.431043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.431061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.440282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.440300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.448333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.448351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.457559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 [2024-11-06 12:18:50.457577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.466181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.898 
[2024-11-06 12:18:50.466199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.898 [2024-11-06 12:18:50.475381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.899 [2024-11-06 12:18:50.475399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.899 [2024-11-06 12:18:50.484532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.899 [2024-11-06 12:18:50.484550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.899 [2024-11-06 12:18:50.493634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.899 [2024-11-06 12:18:50.493651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.899 [2024-11-06 12:18:50.502379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.899 [2024-11-06 12:18:50.502397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.899 [2024-11-06 12:18:50.511184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.899 [2024-11-06 12:18:50.511202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.158 [2024-11-06 12:18:50.519829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.158 [2024-11-06 12:18:50.519847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.158 [2024-11-06 12:18:50.528579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.158 [2024-11-06 12:18:50.528597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.537297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.537314] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.546403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.546421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.555170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.555188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.564477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.564497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.574277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.574295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.583010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.583028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.592221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.592239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.601485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.601502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.610600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.610618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:19.159 [2024-11-06 12:18:50.619839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.619857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.628861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.628879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.638119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.638137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.647283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.647302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.656323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.656341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.665106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.665123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.674298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.674316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.683384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.683402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.692056] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.692074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.701179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.701197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.710418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.710436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.719527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.719545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.728716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.728735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.737077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.737095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.745802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.745819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.754886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.754903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.763857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.763875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.159 [2024-11-06 12:18:50.772559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.159 [2024-11-06 12:18:50.772577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 [2024-11-06 12:18:50.781244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.781262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 [2024-11-06 12:18:50.789924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.789943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 [2024-11-06 12:18:50.798507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.798525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 18400.00 IOPS, 143.75 MiB/s [2024-11-06T11:18:51.034Z] [2024-11-06 12:18:50.807361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.807378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 [2024-11-06 12:18:50.813081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.813097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.419 00:12:19.419 Latency(us) 00:12:19.419 [2024-11-06T11:18:51.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.419 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:19.419 Nvme1n1 : 5.01 18397.84 143.73 0.00 0.00 6949.81 2532.07 11915.64 
00:12:19.419 [2024-11-06T11:18:51.034Z] ===================================================================================================================
00:12:19.419 [2024-11-06T11:18:51.034Z] Total : 18397.84 143.73 0.00 0.00 6949.81 2532.07 11915.64
00:12:19.419 [2024-11-06 12:18:50.821099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.419 [2024-11-06 12:18:50.821113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats with successive timestamps through [2024-11-06 12:18:50.981535]; roughly 20 duplicate pairs elided ...]
00:12:19.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (46504) - No such process 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 46504 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:19.419 12:18:50
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.419 12:18:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:19.419 delay0 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.419 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:19.678 [2024-11-06 12:18:51.155604] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:26.241 Initializing NVMe Controllers 00:12:26.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:26.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:26.241 Initialization complete. Launching workers. 
00:12:26.241 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 16166 00:12:26.241 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16331, failed to submit 101 00:12:26.241 success 16234, unsuccessful 97, failed 0 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.241 rmmod nvme_tcp 00:12:26.241 rmmod nvme_fabrics 00:12:26.241 rmmod nvme_keyring 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 44483 ']' 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 44483 ']' 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@957 -- # uname 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 44483' 00:12:26.241 killing process with pid 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 44483 00:12:26.241 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.242 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.780 00:12:28.780 real 0m31.223s 00:12:28.780 user 0m41.925s 00:12:28.780 sys 0m10.384s 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:28.780 ************************************ 00:12:28.780 END TEST nvmf_zcopy 00:12:28.780 ************************************ 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:28.780 ************************************ 00:12:28.780 START TEST nvmf_nmic 00:12:28.780 ************************************ 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:28.780 * Looking for test storage... 
00:12:28.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:28.780 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.780 12:19:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:28.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.780 --rc genhtml_branch_coverage=1 00:12:28.780 --rc genhtml_function_coverage=1 00:12:28.780 --rc genhtml_legend=1 00:12:28.780 --rc geninfo_all_blocks=1 00:12:28.780 --rc geninfo_unexecuted_blocks=1 
00:12:28.780 00:12:28.780 ' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:28.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.780 --rc genhtml_branch_coverage=1 00:12:28.780 --rc genhtml_function_coverage=1 00:12:28.780 --rc genhtml_legend=1 00:12:28.780 --rc geninfo_all_blocks=1 00:12:28.780 --rc geninfo_unexecuted_blocks=1 00:12:28.780 00:12:28.780 ' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:28.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.780 --rc genhtml_branch_coverage=1 00:12:28.780 --rc genhtml_function_coverage=1 00:12:28.780 --rc genhtml_legend=1 00:12:28.780 --rc geninfo_all_blocks=1 00:12:28.780 --rc geninfo_unexecuted_blocks=1 00:12:28.780 00:12:28.780 ' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:28.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.780 --rc genhtml_branch_coverage=1 00:12:28.780 --rc genhtml_function_coverage=1 00:12:28.780 --rc genhtml_legend=1 00:12:28.780 --rc geninfo_all_blocks=1 00:12:28.780 --rc geninfo_unexecuted_blocks=1 00:12:28.780 00:12:28.780 ' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.780 12:19:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.780 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.781 
12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.781 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.349 12:19:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.349 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.349 Found net devices under 0000:af:00.1: cvl_0_1 00:12:35.349 
12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.349 12:19:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.349 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.349 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.349 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:12:35.349 00:12:35.349 --- 10.0.0.2 ping statistics --- 00:12:35.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.349 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:35.349 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:12:35.350 00:12:35.350 --- 10.0.0.1 ping statistics --- 00:12:35.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.350 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=52342 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 52342 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 52342 ']' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 [2024-11-06 12:19:06.131839] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:12:35.350 [2024-11-06 12:19:06.131899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.350 [2024-11-06 12:19:06.233630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.350 [2024-11-06 12:19:06.283599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.350 [2024-11-06 12:19:06.283644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.350 [2024-11-06 12:19:06.283656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.350 [2024-11-06 12:19:06.283664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:35.350 [2024-11-06 12:19:06.283672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.350 [2024-11-06 12:19:06.285738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.350 [2024-11-06 12:19:06.285854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.350 [2024-11-06 12:19:06.285963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.350 [2024-11-06 12:19:06.285963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 [2024-11-06 12:19:06.429036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.350 12:19:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 Malloc0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 [2024-11-06 12:19:06.498750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:35.350 test case1: single bdev can't be used in multiple subsystems 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 [2024-11-06 12:19:06.526606] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:35.350 [2024-11-06 12:19:06.526638] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:35.350 [2024-11-06 12:19:06.526649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:12:35.350 request: 00:12:35.350 { 00:12:35.350 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:35.350 "namespace": { 00:12:35.350 "bdev_name": "Malloc0", 00:12:35.350 "no_auto_visible": false 00:12:35.350 }, 00:12:35.350 "method": "nvmf_subsystem_add_ns", 00:12:35.350 "req_id": 1 00:12:35.350 } 00:12:35.350 Got JSON-RPC error response 00:12:35.350 response: 00:12:35.350 { 00:12:35.350 "code": -32602, 00:12:35.350 "message": "Invalid parameters" 00:12:35.350 } 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:35.350 Adding namespace failed - expected result. 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:35.350 test case2: host connect to nvmf target in multiple paths 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.350 [2024-11-06 12:19:06.538776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.350 12:19:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.287 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:37.664 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.664 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:37.664 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.664 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:37.664 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:40.195 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:40.195 [global] 00:12:40.195 thread=1 
00:12:40.195 invalidate=1 00:12:40.195 rw=write 00:12:40.195 time_based=1 00:12:40.195 runtime=1 00:12:40.196 ioengine=libaio 00:12:40.196 direct=1 00:12:40.196 bs=4096 00:12:40.196 iodepth=1 00:12:40.196 norandommap=0 00:12:40.196 numjobs=1 00:12:40.196 00:12:40.196 verify_dump=1 00:12:40.196 verify_backlog=512 00:12:40.196 verify_state_save=0 00:12:40.196 do_verify=1 00:12:40.196 verify=crc32c-intel 00:12:40.196 [job0] 00:12:40.196 filename=/dev/nvme0n1 00:12:40.196 Could not set queue depth (nvme0n1) 00:12:40.196 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.196 fio-3.35 00:12:40.196 Starting 1 thread 00:12:41.573 00:12:41.573 job0: (groupid=0, jobs=1): err= 0: pid=53567: Wed Nov 6 12:19:12 2024 00:12:41.573 read: IOPS=519, BW=2079KiB/s (2129kB/s)(2112KiB/1016msec) 00:12:41.573 slat (nsec): min=7584, max=39457, avg=9293.79, stdev=3282.27 00:12:41.573 clat (usec): min=210, max=41066, avg=1489.16, stdev=6981.44 00:12:41.573 lat (usec): min=218, max=41088, avg=1498.46, stdev=6983.59 00:12:41.573 clat percentiles (usec): 00:12:41.573 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:12:41.573 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:12:41.573 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 306], 95.00th=[ 424], 00:12:41.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:41.573 | 99.99th=[41157] 00:12:41.573 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:12:41.573 slat (usec): min=11, max=27145, avg=39.48, stdev=847.91 00:12:41.573 clat (usec): min=118, max=336, avg=175.51, stdev=11.87 00:12:41.573 lat (usec): min=161, max=27390, avg=215.00, stdev=850.17 00:12:41.573 clat percentiles (usec): 00:12:41.573 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:12:41.573 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 176], 00:12:41.573 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 
186], 95.00th=[ 190], 00:12:41.573 | 99.00th=[ 212], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 338], 00:12:41.573 | 99.99th=[ 338] 00:12:41.573 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:12:41.573 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:41.573 lat (usec) : 250=89.11%, 500=9.79%, 750=0.06% 00:12:41.573 lat (msec) : 50=1.03% 00:12:41.573 cpu : usr=0.99%, sys=1.77%, ctx=1556, majf=0, minf=1 00:12:41.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.573 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.574 00:12:41.574 Run status group 0 (all jobs): 00:12:41.574 READ: bw=2079KiB/s (2129kB/s), 2079KiB/s-2079KiB/s (2129kB/s-2129kB/s), io=2112KiB (2163kB), run=1016-1016msec 00:12:41.574 WRITE: bw=4031KiB/s (4128kB/s), 4031KiB/s-4031KiB/s (4128kB/s-4128kB/s), io=4096KiB (4194kB), run=1016-1016msec 00:12:41.574 00:12:41.574 Disk stats (read/write): 00:12:41.574 nvme0n1: ios=551/1024, merge=0/0, ticks=1647/177, in_queue=1824, util=98.80% 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.574 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.574 rmmod nvme_tcp 00:12:41.574 rmmod nvme_fabrics 00:12:41.574 rmmod nvme_keyring 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 52342 ']' 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 52342 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 52342 ']' 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 52342 
00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 52342 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 52342' 00:12:41.574 killing process with pid 52342 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 52342 00:12:41.574 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 52342 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.952 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.881 00:12:43.881 real 0m15.552s 00:12:43.881 user 0m40.271s 00:12:43.881 sys 0m5.489s 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.881 ************************************ 00:12:43.881 END TEST nvmf_nmic 00:12:43.881 ************************************ 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:43.881 ************************************ 00:12:43.881 START TEST nvmf_fio_target 00:12:43.881 ************************************ 00:12:43.881 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:44.140 * Looking for test storage... 
00:12:44.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.140 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:44.141 12:19:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.141 
--rc genhtml_branch_coverage=1 00:12:44.141 --rc genhtml_function_coverage=1 00:12:44.141 --rc genhtml_legend=1 00:12:44.141 --rc geninfo_all_blocks=1 00:12:44.141 --rc geninfo_unexecuted_blocks=1 00:12:44.141 00:12:44.141 ' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.141 --rc genhtml_branch_coverage=1 00:12:44.141 --rc genhtml_function_coverage=1 00:12:44.141 --rc genhtml_legend=1 00:12:44.141 --rc geninfo_all_blocks=1 00:12:44.141 --rc geninfo_unexecuted_blocks=1 00:12:44.141 00:12:44.141 ' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.141 --rc genhtml_branch_coverage=1 00:12:44.141 --rc genhtml_function_coverage=1 00:12:44.141 --rc genhtml_legend=1 00:12:44.141 --rc geninfo_all_blocks=1 00:12:44.141 --rc geninfo_unexecuted_blocks=1 00:12:44.141 00:12:44.141 ' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:44.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.141 --rc genhtml_branch_coverage=1 00:12:44.141 --rc genhtml_function_coverage=1 00:12:44.141 --rc genhtml_legend=1 00:12:44.141 --rc geninfo_all_blocks=1 00:12:44.141 --rc geninfo_unexecuted_blocks=1 00:12:44.141 00:12:44.141 ' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.141 
12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.141 12:19:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.141 12:19:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.141 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.410 12:19:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:49.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:49.410 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.410 
12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.410 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:49.411 Found net devices under 0000:af:00.0: cvl_0_0 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.411 12:19:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:49.411 12:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:49.411 Found net devices under 0000:af:00.1: cvl_0_1 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.411 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:49.670 00:12:49.670 --- 10.0.0.2 ping statistics --- 00:12:49.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.670 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:12:49.670 00:12:49.670 --- 10.0.0.1 ping statistics --- 00:12:49.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.670 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.670 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=57561 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 57561 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 57561 ']' 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:49.929 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.929 [2024-11-06 12:19:21.353932] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:12:49.929 [2024-11-06 12:19:21.353972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.929 [2024-11-06 12:19:21.439594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.929 [2024-11-06 12:19:21.490543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.929 [2024-11-06 12:19:21.490585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.929 [2024-11-06 12:19:21.490596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.929 [2024-11-06 12:19:21.490606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.930 [2024-11-06 12:19:21.490613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:49.930 [2024-11-06 12:19:21.492663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.930 [2024-11-06 12:19:21.492772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.930 [2024-11-06 12:19:21.492876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.930 [2024-11-06 12:19:21.492877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.188 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:50.447 [2024-11-06 12:19:21.888486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.447 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.706 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:50.706 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.965 12:19:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:50.965 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.224 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:51.224 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.482 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:51.483 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:51.741 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.000 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:52.000 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.260 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:52.260 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.518 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:52.518 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:52.777 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.035 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:53.035 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.294 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:53.294 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.553 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.812 [2024-11-06 12:19:25.276128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.812 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:54.071 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:54.329 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:55.710 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:58.244 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:58.244 [global] 00:12:58.244 thread=1 00:12:58.244 invalidate=1 00:12:58.244 rw=write 00:12:58.244 time_based=1 00:12:58.244 runtime=1 00:12:58.244 ioengine=libaio 00:12:58.244 direct=1 00:12:58.244 bs=4096 00:12:58.244 iodepth=1 00:12:58.244 norandommap=0 00:12:58.244 numjobs=1 00:12:58.244 00:12:58.244 
verify_dump=1 00:12:58.244 verify_backlog=512 00:12:58.244 verify_state_save=0 00:12:58.244 do_verify=1 00:12:58.244 verify=crc32c-intel 00:12:58.244 [job0] 00:12:58.244 filename=/dev/nvme0n1 00:12:58.244 [job1] 00:12:58.244 filename=/dev/nvme0n2 00:12:58.244 [job2] 00:12:58.244 filename=/dev/nvme0n3 00:12:58.244 [job3] 00:12:58.244 filename=/dev/nvme0n4 00:12:58.244 Could not set queue depth (nvme0n1) 00:12:58.244 Could not set queue depth (nvme0n2) 00:12:58.244 Could not set queue depth (nvme0n3) 00:12:58.244 Could not set queue depth (nvme0n4) 00:12:58.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.244 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.245 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.245 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.245 fio-3.35 00:12:58.245 Starting 4 threads 00:12:59.645 00:12:59.645 job0: (groupid=0, jobs=1): err= 0: pid=59129: Wed Nov 6 12:19:30 2024 00:12:59.645 read: IOPS=268, BW=1073KiB/s (1098kB/s)(1108KiB/1033msec) 00:12:59.645 slat (nsec): min=7424, max=35196, avg=11252.07, stdev=4830.50 00:12:59.645 clat (usec): min=197, max=41199, avg=3267.74, stdev=10541.80 00:12:59.645 lat (usec): min=206, max=41208, avg=3278.99, stdev=10544.93 00:12:59.645 clat percentiles (usec): 00:12:59.645 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 239], 00:12:59.645 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 285], 60.00th=[ 322], 00:12:59.645 | 70.00th=[ 461], 80.00th=[ 490], 90.00th=[ 523], 95.00th=[41157], 00:12:59.645 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:59.645 | 99.99th=[41157] 00:12:59.645 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:12:59.645 slat (nsec): min=10884, max=45229, 
avg=14283.14, stdev=3612.69 00:12:59.645 clat (usec): min=126, max=369, avg=222.02, stdev=54.21 00:12:59.645 lat (usec): min=142, max=383, avg=236.30, stdev=54.33 00:12:59.645 clat percentiles (usec): 00:12:59.645 | 1.00th=[ 137], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 176], 00:12:59.645 | 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 219], 00:12:59.645 | 70.00th=[ 237], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 00:12:59.645 | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 371], 99.95th=[ 371], 00:12:59.645 | 99.99th=[ 371] 00:12:59.645 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.645 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.645 lat (usec) : 250=59.06%, 500=35.23%, 750=2.92% 00:12:59.645 lat (msec) : 2=0.25%, 50=2.53% 00:12:59.645 cpu : usr=0.68%, sys=1.45%, ctx=791, majf=0, minf=1 00:12:59.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.645 issued rwts: total=277,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.645 job1: (groupid=0, jobs=1): err= 0: pid=59144: Wed Nov 6 12:19:30 2024 00:12:59.645 read: IOPS=136, BW=547KiB/s (560kB/s)(564KiB/1031msec) 00:12:59.645 slat (nsec): min=7789, max=51456, avg=11238.48, stdev=6421.10 00:12:59.645 clat (usec): min=202, max=41310, avg=6476.60, stdev=14481.72 00:12:59.645 lat (usec): min=211, max=41320, avg=6487.84, stdev=14486.20 00:12:59.645 clat percentiles (usec): 00:12:59.645 | 1.00th=[ 212], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 297], 00:12:59.645 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 506], 00:12:59.645 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[41157], 95.00th=[41157], 00:12:59.645 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:12:59.645 | 99.99th=[41157] 00:12:59.645 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:12:59.645 slat (nsec): min=5662, max=47200, avg=12380.04, stdev=2557.25 00:12:59.645 clat (usec): min=140, max=406, avg=207.03, stdev=36.75 00:12:59.645 lat (usec): min=152, max=433, avg=219.41, stdev=36.90 00:12:59.645 clat percentiles (usec): 00:12:59.645 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:12:59.645 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:12:59.645 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 249], 95.00th=[ 262], 00:12:59.645 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 408], 99.95th=[ 408], 00:12:59.645 | 99.99th=[ 408] 00:12:59.645 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.645 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.645 lat (usec) : 250=73.35%, 500=16.85%, 750=6.28% 00:12:59.645 lat (msec) : 2=0.31%, 50=3.22% 00:12:59.645 cpu : usr=0.49%, sys=1.17%, ctx=655, majf=0, minf=1 00:12:59.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.645 issued rwts: total=141,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.645 job2: (groupid=0, jobs=1): err= 0: pid=59169: Wed Nov 6 12:19:30 2024 00:12:59.645 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:12:59.645 slat (nsec): min=10716, max=25633, avg=23263.77, stdev=2894.66 00:12:59.645 clat (usec): min=40804, max=41131, avg=40977.23, stdev=74.94 00:12:59.645 lat (usec): min=40829, max=41156, avg=41000.49, stdev=74.21 00:12:59.645 clat percentiles (usec): 00:12:59.645 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:59.645 
| 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:59.645 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:59.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:59.646 | 99.99th=[41157] 00:12:59.646 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:59.646 slat (usec): min=10, max=797, avg=14.64, stdev=34.79 00:12:59.646 clat (usec): min=144, max=249, avg=172.15, stdev=11.93 00:12:59.646 lat (usec): min=155, max=977, avg=186.79, stdev=37.21 00:12:59.646 clat percentiles (usec): 00:12:59.646 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:12:59.646 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:12:59.646 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:12:59.646 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 251], 99.95th=[ 251], 00:12:59.646 | 99.99th=[ 251] 00:12:59.646 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.646 lat (usec) : 250=95.88% 00:12:59.646 lat (msec) : 50=4.12% 00:12:59.646 cpu : usr=0.70%, sys=0.70%, ctx=536, majf=0, minf=1 00:12:59.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.646 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.646 job3: (groupid=0, jobs=1): err= 0: pid=59178: Wed Nov 6 12:19:30 2024 00:12:59.646 read: IOPS=241, BW=965KiB/s (988kB/s)(996KiB/1032msec) 00:12:59.646 slat (nsec): min=7639, max=24963, avg=9962.89, stdev=3778.18 00:12:59.646 clat (usec): min=218, max=41028, avg=3644.65, stdev=11049.20 00:12:59.646 lat (usec): min=227, max=41050, 
avg=3654.61, stdev=11052.57 00:12:59.646 clat percentiles (usec): 00:12:59.646 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 265], 20.00th=[ 289], 00:12:59.646 | 30.00th=[ 310], 40.00th=[ 334], 50.00th=[ 400], 60.00th=[ 441], 00:12:59.646 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 553], 95.00th=[41157], 00:12:59.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:59.646 | 99.99th=[41157] 00:12:59.646 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:12:59.646 slat (nsec): min=11234, max=37535, avg=12920.71, stdev=2282.09 00:12:59.646 clat (usec): min=136, max=1330, avg=218.95, stdev=79.94 00:12:59.646 lat (usec): min=147, max=1344, avg=231.87, stdev=80.23 00:12:59.646 clat percentiles (usec): 00:12:59.646 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 186], 00:12:59.646 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 215], 00:12:59.646 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 255], 95.00th=[ 322], 00:12:59.646 | 99.00th=[ 404], 99.50th=[ 873], 99.90th=[ 1336], 99.95th=[ 1336], 00:12:59.646 | 99.99th=[ 1336] 00:12:59.646 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:12:59.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:59.646 lat (usec) : 250=60.71%, 500=32.33%, 750=3.68%, 1000=0.39% 00:12:59.646 lat (msec) : 2=0.26%, 50=2.63% 00:12:59.646 cpu : usr=0.97%, sys=0.97%, ctx=762, majf=0, minf=1 00:12:59.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.646 issued rwts: total=249,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.646 00:12:59.646 Run status group 0 (all jobs): 00:12:59.646 READ: bw=2668KiB/s (2732kB/s), 87.9KiB/s-1073KiB/s 
(90.0kB/s-1098kB/s), io=2756KiB (2822kB), run=1001-1033msec 00:12:59.646 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2046KiB/s (2030kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1033msec 00:12:59.646 00:12:59.646 Disk stats (read/write): 00:12:59.646 nvme0n1: ios=297/512, merge=0/0, ticks=1642/110, in_queue=1752, util=99.50% 00:12:59.646 nvme0n2: ios=161/512, merge=0/0, ticks=1695/102, in_queue=1797, util=99.80% 00:12:59.646 nvme0n3: ios=72/512, merge=0/0, ticks=873/75, in_queue=948, util=99.89% 00:12:59.646 nvme0n4: ios=267/512, merge=0/0, ticks=1605/103, in_queue=1708, util=99.78% 00:12:59.646 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:59.646 [global] 00:12:59.646 thread=1 00:12:59.646 invalidate=1 00:12:59.646 rw=randwrite 00:12:59.646 time_based=1 00:12:59.646 runtime=1 00:12:59.646 ioengine=libaio 00:12:59.646 direct=1 00:12:59.646 bs=4096 00:12:59.646 iodepth=1 00:12:59.646 norandommap=0 00:12:59.646 numjobs=1 00:12:59.646 00:12:59.646 verify_dump=1 00:12:59.646 verify_backlog=512 00:12:59.646 verify_state_save=0 00:12:59.646 do_verify=1 00:12:59.646 verify=crc32c-intel 00:12:59.646 [job0] 00:12:59.646 filename=/dev/nvme0n1 00:12:59.646 [job1] 00:12:59.646 filename=/dev/nvme0n2 00:12:59.646 [job2] 00:12:59.646 filename=/dev/nvme0n3 00:12:59.646 [job3] 00:12:59.646 filename=/dev/nvme0n4 00:12:59.646 Could not set queue depth (nvme0n1) 00:12:59.646 Could not set queue depth (nvme0n2) 00:12:59.646 Could not set queue depth (nvme0n3) 00:12:59.646 Could not set queue depth (nvme0n4) 00:12:59.905 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:59.905 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:59.905 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:59.905 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:59.905 fio-3.35 00:12:59.905 Starting 4 threads 00:13:01.295 00:13:01.295 job0: (groupid=0, jobs=1): err= 0: pid=59646: Wed Nov 6 12:19:32 2024 00:13:01.295 read: IOPS=2402, BW=9610KiB/s (9841kB/s)(9620KiB/1001msec) 00:13:01.295 slat (nsec): min=6345, max=43964, avg=7595.71, stdev=1371.62 00:13:01.296 clat (usec): min=155, max=511, avg=235.90, stdev=30.52 00:13:01.296 lat (usec): min=162, max=519, avg=243.49, stdev=30.54 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 200], 20.00th=[ 215], 00:13:01.296 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:13:01.296 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:13:01.296 | 99.00th=[ 289], 99.50th=[ 412], 99.90th=[ 486], 99.95th=[ 498], 00:13:01.296 | 99.99th=[ 510] 00:13:01.296 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:01.296 slat (nsec): min=8857, max=46904, avg=10415.06, stdev=1816.81 00:13:01.296 clat (usec): min=98, max=324, avg=146.58, stdev=35.32 00:13:01.296 lat (usec): min=107, max=356, avg=157.00, stdev=35.88 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 121], 00:13:01.296 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:13:01.296 | 70.00th=[ 149], 80.00th=[ 163], 90.00th=[ 208], 95.00th=[ 219], 00:13:01.296 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 318], 00:13:01.296 | 99.99th=[ 326] 00:13:01.296 bw ( KiB/s): min=11568, max=11568, per=73.01%, avg=11568.00, stdev= 0.00, samples=1 00:13:01.296 iops : min= 2892, max= 2892, avg=2892.00, stdev= 0.00, samples=1 00:13:01.296 lat (usec) : 100=0.04%, 250=84.87%, 500=15.07%, 750=0.02% 00:13:01.296 cpu : usr=3.40%, sys=5.50%, ctx=4965, majf=0, minf=1 00:13:01.296 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 issued rwts: total=2405,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:01.296 job1: (groupid=0, jobs=1): err= 0: pid=59657: Wed Nov 6 12:19:32 2024 00:13:01.296 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:13:01.296 slat (nsec): min=10548, max=23518, avg=22494.77, stdev=2673.66 00:13:01.296 clat (usec): min=40898, max=41998, avg=41373.48, stdev=498.84 00:13:01.296 lat (usec): min=40909, max=42021, avg=41395.97, stdev=499.43 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:01.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:01.296 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:01.296 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:01.296 | 99.99th=[42206] 00:13:01.296 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:13:01.296 slat (nsec): min=9709, max=58714, avg=11972.08, stdev=2862.77 00:13:01.296 clat (usec): min=119, max=384, avg=223.84, stdev=48.92 00:13:01.296 lat (usec): min=131, max=395, avg=235.82, stdev=49.30 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 131], 5.00th=[ 147], 10.00th=[ 169], 20.00th=[ 182], 00:13:01.296 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 219], 60.00th=[ 237], 00:13:01.296 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 302], 00:13:01.296 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 383], 00:13:01.296 | 99.99th=[ 383] 00:13:01.296 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:01.296 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:13:01.296 lat (usec) : 250=66.48%, 500=29.40% 00:13:01.296 lat (msec) : 50=4.12% 00:13:01.296 cpu : usr=0.39%, sys=0.48%, ctx=537, majf=0, minf=1 00:13:01.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:01.296 job2: (groupid=0, jobs=1): err= 0: pid=59665: Wed Nov 6 12:19:32 2024 00:13:01.296 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:13:01.296 slat (nsec): min=9513, max=23634, avg=22725.95, stdev=2955.24 00:13:01.296 clat (usec): min=40892, max=42087, avg=41340.50, stdev=502.86 00:13:01.296 lat (usec): min=40916, max=42096, avg=41363.23, stdev=501.92 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:01.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:01.296 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:01.296 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:01.296 | 99.99th=[42206] 00:13:01.296 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:13:01.296 slat (nsec): min=9684, max=42578, avg=10740.71, stdev=1691.88 00:13:01.296 clat (usec): min=139, max=492, avg=202.83, stdev=28.06 00:13:01.296 lat (usec): min=149, max=501, avg=213.57, stdev=28.29 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 186], 00:13:01.296 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:13:01.296 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 239], 00:13:01.296 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 494], 99.95th=[ 494], 00:13:01.296 | 
99.99th=[ 494] 00:13:01.296 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:01.296 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:01.296 lat (usec) : 250=93.45%, 500=2.43% 00:13:01.296 lat (msec) : 50=4.12% 00:13:01.296 cpu : usr=0.20%, sys=0.59%, ctx=536, majf=0, minf=1 00:13:01.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:01.296 job3: (groupid=0, jobs=1): err= 0: pid=59671: Wed Nov 6 12:19:32 2024 00:13:01.296 read: IOPS=27, BW=110KiB/s (113kB/s)(112KiB/1017msec) 00:13:01.296 slat (nsec): min=7242, max=24280, avg=19416.96, stdev=6844.19 00:13:01.296 clat (usec): min=266, max=42050, avg=32469.12, stdev=17112.20 00:13:01.296 lat (usec): min=273, max=42074, avg=32488.53, stdev=17118.31 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 351], 00:13:01.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:01.296 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:01.296 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:01.296 | 99.99th=[42206] 00:13:01.296 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:13:01.296 slat (nsec): min=9343, max=40185, avg=10332.79, stdev=1605.86 00:13:01.296 clat (usec): min=144, max=387, avg=197.10, stdev=22.78 00:13:01.296 lat (usec): min=156, max=427, avg=207.43, stdev=23.29 00:13:01.296 clat percentiles (usec): 00:13:01.296 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:13:01.296 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 
00:13:01.296 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 227], 95.00th=[ 241], 00:13:01.296 | 99.00th=[ 255], 99.50th=[ 281], 99.90th=[ 388], 99.95th=[ 388], 00:13:01.296 | 99.99th=[ 388] 00:13:01.296 bw ( KiB/s): min= 4096, max= 4096, per=25.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:01.296 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:01.296 lat (usec) : 250=93.52%, 500=2.41% 00:13:01.296 lat (msec) : 50=4.07% 00:13:01.296 cpu : usr=0.39%, sys=0.39%, ctx=540, majf=0, minf=1 00:13:01.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.296 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:01.296 00:13:01.296 Run status group 0 (all jobs): 00:13:01.296 READ: bw=9582KiB/s (9812kB/s), 85.1KiB/s-9610KiB/s (87.1kB/s-9841kB/s), io=9908KiB (10.1MB), run=1001-1034msec 00:13:01.296 WRITE: bw=15.5MiB/s (16.2MB/s), 1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1034msec 00:13:01.296 00:13:01.296 Disk stats (read/write): 00:13:01.296 nvme0n1: ios=2098/2094, merge=0/0, ticks=504/296, in_queue=800, util=86.77% 00:13:01.296 nvme0n2: ios=44/512, merge=0/0, ticks=1689/112, in_queue=1801, util=98.17% 00:13:01.296 nvme0n3: ios=43/512, merge=0/0, ticks=1687/106, in_queue=1793, util=97.91% 00:13:01.296 nvme0n4: ios=23/512, merge=0/0, ticks=704/101, in_queue=805, util=89.70% 00:13:01.296 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:01.296 [global] 00:13:01.296 thread=1 00:13:01.296 invalidate=1 00:13:01.296 rw=write 00:13:01.296 time_based=1 00:13:01.296 runtime=1 00:13:01.296 ioengine=libaio 
00:13:01.296 direct=1 00:13:01.296 bs=4096 00:13:01.296 iodepth=128 00:13:01.296 norandommap=0 00:13:01.296 numjobs=1 00:13:01.296 00:13:01.296 verify_dump=1 00:13:01.296 verify_backlog=512 00:13:01.296 verify_state_save=0 00:13:01.296 do_verify=1 00:13:01.296 verify=crc32c-intel 00:13:01.296 [job0] 00:13:01.296 filename=/dev/nvme0n1 00:13:01.296 [job1] 00:13:01.296 filename=/dev/nvme0n2 00:13:01.296 [job2] 00:13:01.296 filename=/dev/nvme0n3 00:13:01.296 [job3] 00:13:01.296 filename=/dev/nvme0n4 00:13:01.296 Could not set queue depth (nvme0n1) 00:13:01.296 Could not set queue depth (nvme0n2) 00:13:01.296 Could not set queue depth (nvme0n3) 00:13:01.296 Could not set queue depth (nvme0n4) 00:13:01.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.558 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.558 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.558 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.558 fio-3.35 00:13:01.558 Starting 4 threads 00:13:02.957 00:13:02.957 job0: (groupid=0, jobs=1): err= 0: pid=60134: Wed Nov 6 12:19:34 2024 00:13:02.957 read: IOPS=4546, BW=17.8MiB/s (18.6MB/s)(18.1MiB/1017msec) 00:13:02.957 slat (nsec): min=1539, max=15193k, avg=94765.87, stdev=639722.97 00:13:02.957 clat (usec): min=5145, max=53772, avg=12500.54, stdev=5125.98 00:13:02.957 lat (usec): min=5153, max=53778, avg=12595.31, stdev=5159.42 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 8586], 20.00th=[ 9503], 00:13:02.957 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11076], 60.00th=[11863], 00:13:02.957 | 70.00th=[13435], 80.00th=[14615], 90.00th=[18220], 95.00th=[20579], 00:13:02.957 | 99.00th=[36963], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 
00:13:02.957 | 99.99th=[53740] 00:13:02.957 write: IOPS=5034, BW=19.7MiB/s (20.6MB/s)(20.0MiB/1017msec); 0 zone resets 00:13:02.957 slat (usec): min=2, max=18133, avg=101.35, stdev=720.24 00:13:02.957 clat (usec): min=314, max=57625, avg=13916.39, stdev=10512.95 00:13:02.957 lat (usec): min=323, max=57632, avg=14017.74, stdev=10572.90 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 1.00th=[ 848], 5.00th=[ 1942], 10.00th=[ 7046], 20.00th=[ 9241], 00:13:02.957 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10945], 00:13:02.957 | 70.00th=[11731], 80.00th=[13829], 90.00th=[34341], 95.00th=[39584], 00:13:02.957 | 99.00th=[52167], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:13:02.957 | 99.99th=[57410] 00:13:02.957 bw ( KiB/s): min=16064, max=24000, per=28.75%, avg=20032.00, stdev=5611.60, samples=2 00:13:02.957 iops : min= 4016, max= 6000, avg=5008.00, stdev=1402.90, samples=2 00:13:02.957 lat (usec) : 500=0.11%, 750=0.03%, 1000=1.33% 00:13:02.957 lat (msec) : 2=1.41%, 4=0.43%, 10=36.15%, 20=48.27%, 50=11.68% 00:13:02.957 lat (msec) : 100=0.60% 00:13:02.957 cpu : usr=3.84%, sys=4.53%, ctx=437, majf=0, minf=1 00:13:02.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:02.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.957 issued rwts: total=4624,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.957 job1: (groupid=0, jobs=1): err= 0: pid=60149: Wed Nov 6 12:19:34 2024 00:13:02.957 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:02.957 slat (nsec): min=1568, max=14062k, avg=108305.81, stdev=744944.14 00:13:02.957 clat (usec): min=4280, max=34444, avg=14063.13, stdev=5227.83 00:13:02.957 lat (usec): min=4286, max=34454, avg=14171.43, stdev=5286.76 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 
1.00th=[ 5014], 5.00th=[ 6849], 10.00th=[ 7898], 20.00th=[ 9634], 00:13:02.957 | 30.00th=[10552], 40.00th=[11863], 50.00th=[13304], 60.00th=[14746], 00:13:02.957 | 70.00th=[17433], 80.00th=[18220], 90.00th=[20055], 95.00th=[25297], 00:13:02.957 | 99.00th=[28967], 99.50th=[30278], 99.90th=[31589], 99.95th=[32113], 00:13:02.957 | 99.99th=[34341] 00:13:02.957 write: IOPS=4108, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:13:02.957 slat (usec): min=2, max=24426, avg=126.02, stdev=724.12 00:13:02.957 clat (usec): min=1112, max=68843, avg=16921.04, stdev=14518.33 00:13:02.957 lat (usec): min=1122, max=69845, avg=17047.07, stdev=14607.92 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 1.00th=[ 3294], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 8455], 00:13:02.957 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[11469], 60.00th=[11863], 00:13:02.957 | 70.00th=[14746], 80.00th=[22152], 90.00th=[38536], 95.00th=[53216], 00:13:02.957 | 99.00th=[65274], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:13:02.957 | 99.99th=[68682] 00:13:02.957 bw ( KiB/s): min=16384, max=16384, per=23.51%, avg=16384.00, stdev= 0.00, samples=2 00:13:02.957 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:02.957 lat (msec) : 2=0.21%, 4=0.94%, 10=34.17%, 20=47.89%, 50=13.57% 00:13:02.957 lat (msec) : 100=3.22% 00:13:02.957 cpu : usr=2.79%, sys=4.49%, ctx=512, majf=0, minf=1 00:13:02.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.957 issued rwts: total=4096,4125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.957 job2: (groupid=0, jobs=1): err= 0: pid=60174: Wed Nov 6 12:19:34 2024 00:13:02.957 read: IOPS=3804, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1017msec) 00:13:02.957 slat (nsec): 
min=1669, max=27477k, avg=135558.45, stdev=1005462.71 00:13:02.957 clat (usec): min=1424, max=35230, avg=17526.06, stdev=5290.03 00:13:02.957 lat (usec): min=5804, max=36152, avg=17661.62, stdev=5347.57 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 1.00th=[ 5932], 5.00th=[10945], 10.00th=[11731], 20.00th=[13173], 00:13:02.957 | 30.00th=[15139], 40.00th=[15795], 50.00th=[16319], 60.00th=[17433], 00:13:02.957 | 70.00th=[19792], 80.00th=[21627], 90.00th=[24773], 95.00th=[28967], 00:13:02.957 | 99.00th=[30802], 99.50th=[30802], 99.90th=[34341], 99.95th=[34866], 00:13:02.957 | 99.99th=[35390] 00:13:02.957 write: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec); 0 zone resets 00:13:02.957 slat (usec): min=2, max=18609, avg=110.71, stdev=757.42 00:13:02.957 clat (usec): min=1604, max=53515, avg=14879.90, stdev=5620.31 00:13:02.957 lat (usec): min=1616, max=53520, avg=14990.61, stdev=5661.60 00:13:02.957 clat percentiles (usec): 00:13:02.957 | 1.00th=[ 5735], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[10159], 00:13:02.957 | 30.00th=[11863], 40.00th=[13304], 50.00th=[15664], 60.00th=[15926], 00:13:02.957 | 70.00th=[16712], 80.00th=[17171], 90.00th=[20317], 95.00th=[24773], 00:13:02.957 | 99.00th=[31065], 99.50th=[38536], 99.90th=[46400], 99.95th=[46400], 00:13:02.957 | 99.99th=[53740] 00:13:02.957 bw ( KiB/s): min=16384, max=16384, per=23.51%, avg=16384.00, stdev= 0.00, samples=2 00:13:02.957 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:02.957 lat (msec) : 2=0.10%, 10=11.32%, 20=68.89%, 50=19.67%, 100=0.01% 00:13:02.958 cpu : usr=2.95%, sys=4.23%, ctx=329, majf=0, minf=1 00:13:02.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.958 issued rwts: total=3869,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.958 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:13:02.958 job3: (groupid=0, jobs=1): err= 0: pid=60182: Wed Nov 6 12:19:34 2024 00:13:02.958 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:13:02.958 slat (nsec): min=1642, max=22417k, avg=114830.63, stdev=807387.11 00:13:02.958 clat (usec): min=4259, max=50131, avg=15394.22, stdev=6700.99 00:13:02.958 lat (usec): min=4261, max=50156, avg=15509.05, stdev=6746.20 00:13:02.958 clat percentiles (usec): 00:13:02.958 | 1.00th=[ 4621], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10945], 00:13:02.958 | 30.00th=[11600], 40.00th=[12911], 50.00th=[13829], 60.00th=[14353], 00:13:02.958 | 70.00th=[16450], 80.00th=[20055], 90.00th=[23462], 95.00th=[29230], 00:13:02.958 | 99.00th=[38536], 99.50th=[40109], 99.90th=[40109], 99.95th=[43254], 00:13:02.958 | 99.99th=[50070] 00:13:02.958 write: IOPS=4348, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1006msec); 0 zone resets 00:13:02.958 slat (usec): min=2, max=13651, avg=115.26, stdev=654.87 00:13:02.958 clat (usec): min=1157, max=37342, avg=14703.98, stdev=5265.33 00:13:02.958 lat (usec): min=1218, max=37347, avg=14819.24, stdev=5300.98 00:13:02.958 clat percentiles (usec): 00:13:02.958 | 1.00th=[ 4686], 5.00th=[ 6194], 10.00th=[ 9241], 20.00th=[11207], 00:13:02.958 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13435], 60.00th=[14877], 00:13:02.958 | 70.00th=[15795], 80.00th=[18220], 90.00th=[23200], 95.00th=[25297], 00:13:02.958 | 99.00th=[27919], 99.50th=[28443], 99.90th=[37487], 99.95th=[37487], 00:13:02.958 | 99.99th=[37487] 00:13:02.958 bw ( KiB/s): min=16744, max=17240, per=24.39%, avg=16992.00, stdev=350.72, samples=2 00:13:02.958 iops : min= 4186, max= 4310, avg=4248.00, stdev=87.68, samples=2 00:13:02.958 lat (msec) : 2=0.01%, 4=0.48%, 10=13.29%, 20=67.23%, 50=18.97% 00:13:02.958 lat (msec) : 100=0.01% 00:13:02.958 cpu : usr=2.29%, sys=5.07%, ctx=412, majf=0, minf=1 00:13:02.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:02.958 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.958 issued rwts: total=4096,4375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.958 00:13:02.958 Run status group 0 (all jobs): 00:13:02.958 READ: bw=64.1MiB/s (67.2MB/s), 14.9MiB/s-17.8MiB/s (15.6MB/s-18.6MB/s), io=65.2MiB (68.3MB), run=1004-1017msec 00:13:02.958 WRITE: bw=68.0MiB/s (71.4MB/s), 15.7MiB/s-19.7MiB/s (16.5MB/s-20.6MB/s), io=69.2MiB (72.6MB), run=1004-1017msec 00:13:02.958 00:13:02.958 Disk stats (read/write): 00:13:02.958 nvme0n1: ios=4146/4198, merge=0/0, ticks=22775/29444, in_queue=52219, util=84.47% 00:13:02.958 nvme0n2: ios=3132/3583, merge=0/0, ticks=25557/44697, in_queue=70254, util=82.82% 00:13:02.958 nvme0n3: ios=3038/3072, merge=0/0, ticks=28393/27284, in_queue=55677, util=99.04% 00:13:02.958 nvme0n4: ios=3569/3584, merge=0/0, ticks=24788/23034, in_queue=47822, util=87.82% 00:13:02.958 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:02.958 [global] 00:13:02.958 thread=1 00:13:02.958 invalidate=1 00:13:02.958 rw=randwrite 00:13:02.958 time_based=1 00:13:02.958 runtime=1 00:13:02.958 ioengine=libaio 00:13:02.958 direct=1 00:13:02.958 bs=4096 00:13:02.958 iodepth=128 00:13:02.958 norandommap=0 00:13:02.958 numjobs=1 00:13:02.958 00:13:02.958 verify_dump=1 00:13:02.958 verify_backlog=512 00:13:02.958 verify_state_save=0 00:13:02.958 do_verify=1 00:13:02.958 verify=crc32c-intel 00:13:02.958 [job0] 00:13:02.958 filename=/dev/nvme0n1 00:13:02.958 [job1] 00:13:02.958 filename=/dev/nvme0n2 00:13:02.958 [job2] 00:13:02.958 filename=/dev/nvme0n3 00:13:02.958 [job3] 00:13:02.958 filename=/dev/nvme0n4 00:13:02.958 Could not set queue depth (nvme0n1) 00:13:02.958 Could not set queue depth 
(nvme0n2) 00:13:02.958 Could not set queue depth (nvme0n3) 00:13:02.958 Could not set queue depth (nvme0n4) 00:13:03.221 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.221 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.221 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.221 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.221 fio-3.35 00:13:03.221 Starting 4 threads 00:13:04.611 00:13:04.611 job0: (groupid=0, jobs=1): err= 0: pid=60622: Wed Nov 6 12:19:35 2024 00:13:04.611 read: IOPS=5601, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1004msec) 00:13:04.611 slat (nsec): min=1905, max=10871k, avg=99233.13, stdev=706569.73 00:13:04.611 clat (usec): min=1805, max=31877, avg=12259.23, stdev=3500.19 00:13:04.611 lat (usec): min=4041, max=31886, avg=12358.46, stdev=3541.71 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[ 4883], 5.00th=[ 8094], 10.00th=[ 9372], 20.00th=[10290], 00:13:04.611 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:13:04.611 | 70.00th=[12125], 80.00th=[14222], 90.00th=[17171], 95.00th=[19006], 00:13:04.611 | 99.00th=[24511], 99.50th=[30278], 99.90th=[31589], 99.95th=[31851], 00:13:04.611 | 99.99th=[31851] 00:13:04.611 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:13:04.611 slat (usec): min=3, max=2168, avg=69.24, stdev=199.16 00:13:04.611 clat (usec): min=1505, max=31842, avg=10286.86, stdev=2520.38 00:13:04.611 lat (usec): min=1519, max=31846, avg=10356.10, stdev=2534.32 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[ 3195], 5.00th=[ 4752], 10.00th=[ 6587], 20.00th=[ 9110], 00:13:04.611 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11207], 60.00th=[11469], 00:13:04.611 | 70.00th=[11469], 
80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:13:04.611 | 99.00th=[17695], 99.50th=[20841], 99.90th=[23725], 99.95th=[23725], 00:13:04.611 | 99.99th=[31851] 00:13:04.611 bw ( KiB/s): min=21456, max=23600, per=29.58%, avg=22528.00, stdev=1516.04, samples=2 00:13:04.611 iops : min= 5364, max= 5900, avg=5632.00, stdev=379.01, samples=2 00:13:04.611 lat (msec) : 2=0.18%, 4=1.40%, 10=23.39%, 20=73.09%, 50=1.94% 00:13:04.611 cpu : usr=3.99%, sys=7.18%, ctx=780, majf=0, minf=1 00:13:04.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:04.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.611 issued rwts: total=5624,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.611 job1: (groupid=0, jobs=1): err= 0: pid=60623: Wed Nov 6 12:19:35 2024 00:13:04.611 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec) 00:13:04.611 slat (nsec): min=1566, max=22119k, avg=110321.21, stdev=787130.03 00:13:04.611 clat (usec): min=4109, max=32995, avg=13243.30, stdev=4207.47 00:13:04.611 lat (usec): min=4117, max=33003, avg=13353.62, stdev=4246.74 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[ 5014], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10945], 00:13:04.611 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:13:04.611 | 70.00th=[13960], 80.00th=[16057], 90.00th=[18744], 95.00th=[20579], 00:13:04.611 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:13:04.611 | 99.99th=[32900] 00:13:04.611 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:13:04.611 slat (usec): min=2, max=38093, avg=82.64, stdev=727.46 00:13:04.611 clat (usec): min=1194, max=85095, avg=12332.95, stdev=9145.21 00:13:04.611 lat (usec): min=1204, max=85118, avg=12415.59, stdev=9212.76 00:13:04.611 clat 
percentiles (usec): 00:13:04.611 | 1.00th=[ 3130], 5.00th=[ 5014], 10.00th=[ 6783], 20.00th=[ 8848], 00:13:04.611 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:13:04.611 | 70.00th=[11600], 80.00th=[11731], 90.00th=[15401], 95.00th=[19268], 00:13:04.611 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64226], 99.95th=[64750], 00:13:04.611 | 99.99th=[85459] 00:13:04.611 bw ( KiB/s): min=20416, max=20544, per=26.89%, avg=20480.00, stdev=90.51, samples=2 00:13:04.611 iops : min= 5104, max= 5136, avg=5120.00, stdev=22.63, samples=2 00:13:04.611 lat (msec) : 2=0.19%, 4=1.33%, 10=14.59%, 20=78.00%, 50=4.62% 00:13:04.611 lat (msec) : 100=1.27% 00:13:04.611 cpu : usr=2.99%, sys=4.98%, ctx=612, majf=0, minf=2 00:13:04.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:04.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.611 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.611 job2: (groupid=0, jobs=1): err= 0: pid=60624: Wed Nov 6 12:19:35 2024 00:13:04.611 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:13:04.611 slat (nsec): min=1709, max=16365k, avg=150472.45, stdev=1049806.14 00:13:04.611 clat (usec): min=10947, max=59004, avg=19888.23, stdev=5727.35 00:13:04.611 lat (usec): min=10953, max=59013, avg=20038.70, stdev=5796.36 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[11600], 5.00th=[14746], 10.00th=[16057], 20.00th=[16909], 00:13:04.611 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:13:04.611 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26084], 95.00th=[32113], 00:13:04.611 | 99.00th=[41681], 99.50th=[45351], 99.90th=[58983], 99.95th=[58983], 00:13:04.611 | 99.99th=[58983] 00:13:04.611 write: IOPS=3249, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1005msec); 0 
zone resets 00:13:04.611 slat (usec): min=3, max=19554, avg=150.55, stdev=908.65 00:13:04.611 clat (usec): min=4128, max=66613, avg=20172.30, stdev=8029.75 00:13:04.611 lat (usec): min=5856, max=66628, avg=20322.86, stdev=8104.96 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[10290], 5.00th=[14222], 10.00th=[15926], 20.00th=[17171], 00:13:04.611 | 30.00th=[17695], 40.00th=[17957], 50.00th=[17957], 60.00th=[18482], 00:13:04.611 | 70.00th=[18744], 80.00th=[22414], 90.00th=[24773], 95.00th=[30802], 00:13:04.611 | 99.00th=[60556], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:13:04.611 | 99.99th=[66847] 00:13:04.611 bw ( KiB/s): min=11672, max=13440, per=16.48%, avg=12556.00, stdev=1250.16, samples=2 00:13:04.611 iops : min= 2918, max= 3360, avg=3139.00, stdev=312.54, samples=2 00:13:04.611 lat (msec) : 10=0.36%, 20=74.46%, 50=23.51%, 100=1.67% 00:13:04.611 cpu : usr=2.49%, sys=4.88%, ctx=333, majf=0, minf=1 00:13:04.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:04.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.611 issued rwts: total=3072,3266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.611 job3: (groupid=0, jobs=1): err= 0: pid=60625: Wed Nov 6 12:19:35 2024 00:13:04.611 read: IOPS=4959, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1004msec) 00:13:04.611 slat (nsec): min=1991, max=10042k, avg=103672.75, stdev=626859.60 00:13:04.611 clat (usec): min=2020, max=21398, avg=12628.68, stdev=2122.82 00:13:04.611 lat (usec): min=4815, max=21425, avg=12732.36, stdev=2172.10 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[ 5276], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11207], 00:13:04.611 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:13:04.611 | 70.00th=[13173], 80.00th=[13566], 90.00th=[15270], 
95.00th=[16581], 00:13:04.611 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20317], 99.95th=[20579], 00:13:04.611 | 99.99th=[21365] 00:13:04.611 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:13:04.611 slat (usec): min=3, max=5908, avg=89.04, stdev=315.25 00:13:04.611 clat (usec): min=6414, max=19027, avg=12526.73, stdev=1643.97 00:13:04.611 lat (usec): min=6438, max=19241, avg=12615.77, stdev=1658.46 00:13:04.611 clat percentiles (usec): 00:13:04.611 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[10945], 20.00th=[11338], 00:13:04.611 | 30.00th=[11600], 40.00th=[12518], 50.00th=[12911], 60.00th=[13042], 00:13:04.611 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[15664], 00:13:04.611 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18482], 99.95th=[19006], 00:13:04.611 | 99.99th=[19006] 00:13:04.611 bw ( KiB/s): min=20480, max=20480, per=26.89%, avg=20480.00, stdev= 0.00, samples=2 00:13:04.611 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:04.611 lat (msec) : 4=0.01%, 10=7.49%, 20=92.43%, 50=0.07% 00:13:04.611 cpu : usr=3.79%, sys=5.88%, ctx=736, majf=0, minf=2 00:13:04.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:04.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.611 issued rwts: total=4979,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.611 00:13:04.611 Run status group 0 (all jobs): 00:13:04.611 READ: bw=72.0MiB/s (75.5MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-22.9MB/s), io=72.4MiB (75.9MB), run=1004-1005msec 00:13:04.611 WRITE: bw=74.4MiB/s (78.0MB/s), 12.7MiB/s-21.9MiB/s (13.3MB/s-23.0MB/s), io=74.8MiB (78.4MB), run=1004-1005msec 00:13:04.611 00:13:04.611 Disk stats (read/write): 00:13:04.611 nvme0n1: ios=4502/4608, merge=0/0, ticks=54803/47119, in_queue=101922, util=96.59% 
00:13:04.611 nvme0n2: ios=4145/4181, merge=0/0, ticks=47079/44043, in_queue=91122, util=85.90% 00:13:04.611 nvme0n3: ios=2616/2977, merge=0/0, ticks=31164/33595, in_queue=64759, util=89.48% 00:13:04.611 nvme0n4: ios=4153/4199, merge=0/0, ticks=26346/24561, in_queue=50907, util=92.47% 00:13:04.612 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:04.612 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=60771 00:13:04.612 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:04.612 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:04.612 [global] 00:13:04.612 thread=1 00:13:04.612 invalidate=1 00:13:04.612 rw=read 00:13:04.612 time_based=1 00:13:04.612 runtime=10 00:13:04.612 ioengine=libaio 00:13:04.612 direct=1 00:13:04.612 bs=4096 00:13:04.612 iodepth=1 00:13:04.612 norandommap=1 00:13:04.612 numjobs=1 00:13:04.612 00:13:04.612 [job0] 00:13:04.612 filename=/dev/nvme0n1 00:13:04.612 [job1] 00:13:04.612 filename=/dev/nvme0n2 00:13:04.612 [job2] 00:13:04.612 filename=/dev/nvme0n3 00:13:04.612 [job3] 00:13:04.612 filename=/dev/nvme0n4 00:13:04.612 Could not set queue depth (nvme0n1) 00:13:04.612 Could not set queue depth (nvme0n2) 00:13:04.612 Could not set queue depth (nvme0n3) 00:13:04.612 Could not set queue depth (nvme0n4) 00:13:04.874 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.874 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.874 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.874 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:04.874 fio-3.35 00:13:04.874 Starting 4 
threads 00:13:07.397 12:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:07.654 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:07.654 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:13:07.654 fio: pid=61053, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:07.911 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.911 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:07.911 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:13:07.911 fio: pid=61052, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:08.168 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:08.168 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:08.168 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=323584, buflen=4096 00:13:08.168 fio: pid=61049, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:08.426 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=348160, buflen=4096 00:13:08.426 fio: pid=61051, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:08.426 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:08.426 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:08.684 00:13:08.684 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=61049: Wed Nov 6 12:19:40 2024 00:13:08.684 read: IOPS=24, BW=96.8KiB/s (99.2kB/s)(316KiB/3263msec) 00:13:08.684 slat (usec): min=13, max=29749, avg=542.24, stdev=3559.76 00:13:08.684 clat (usec): min=341, max=41783, avg=40473.71, stdev=4574.54 00:13:08.684 lat (usec): min=378, max=70919, avg=41022.52, stdev=5874.77 00:13:08.684 clat percentiles (usec): 00:13:08.684 | 1.00th=[ 343], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:08.684 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:08.684 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:08.684 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:08.684 | 99.99th=[41681] 00:13:08.684 bw ( KiB/s): min= 90, max= 104, per=28.06%, avg=97.67, stdev= 5.43, samples=6 00:13:08.684 iops : min= 22, max= 26, avg=24.33, stdev= 1.51, samples=6 00:13:08.684 lat (usec) : 500=1.25% 00:13:08.684 lat (msec) : 50=97.50% 00:13:08.684 cpu : usr=0.12%, sys=0.00%, ctx=83, majf=0, minf=1 00:13:08.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.684 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.684 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.684 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=61051: Wed Nov 6 12:19:40 2024 00:13:08.684 read: IOPS=24, BW=96.7KiB/s 
(99.0kB/s)(340KiB/3517msec) 00:13:08.684 slat (usec): min=21, max=5706, avg=90.11, stdev=612.84 00:13:08.684 clat (usec): min=40822, max=42038, avg=41027.67, stdev=229.78 00:13:08.684 lat (usec): min=40844, max=47022, avg=41118.57, stdev=686.68 00:13:08.684 clat percentiles (usec): 00:13:08.684 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:08.684 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:08.684 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:08.684 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:08.684 | 99.99th=[42206] 00:13:08.684 bw ( KiB/s): min= 96, max= 97, per=27.77%, avg=96.17, stdev= 0.41, samples=6 00:13:08.684 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:13:08.684 lat (msec) : 50=98.84% 00:13:08.684 cpu : usr=0.09%, sys=0.00%, ctx=89, majf=0, minf=2 00:13:08.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.685 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=61052: Wed Nov 6 12:19:40 2024 00:13:08.685 read: IOPS=24, BW=98.1KiB/s (100kB/s)(292KiB/2978msec) 00:13:08.685 slat (nsec): min=12050, max=63960, avg=23419.78, stdev=5219.83 00:13:08.685 clat (usec): min=628, max=41193, avg=40413.63, stdev=4721.59 00:13:08.685 lat (usec): min=661, max=41215, avg=40437.03, stdev=4720.48 00:13:08.685 clat percentiles (usec): 00:13:08.685 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:08.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:08.685 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:13:08.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:08.685 | 99.99th=[41157] 00:13:08.685 bw ( KiB/s): min= 96, max= 104, per=28.63%, avg=99.20, stdev= 4.38, samples=5 00:13:08.685 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:13:08.685 lat (usec) : 750=1.35% 00:13:08.685 lat (msec) : 50=97.30% 00:13:08.685 cpu : usr=0.13%, sys=0.00%, ctx=75, majf=0, minf=2 00:13:08.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.685 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=61053: Wed Nov 6 12:19:40 2024 00:13:08.685 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2728msec) 00:13:08.685 slat (nsec): min=12870, max=37034, avg=23749.46, stdev=3402.71 00:13:08.685 clat (usec): min=627, max=41120, avg=40366.96, stdev=4928.91 00:13:08.685 lat (usec): min=662, max=41143, avg=40390.72, stdev=4927.51 00:13:08.685 clat percentiles (usec): 00:13:08.685 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:08.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:08.685 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:08.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:08.685 | 99.99th=[41157] 00:13:08.685 bw ( KiB/s): min= 96, max= 104, per=28.63%, avg=99.20, stdev= 4.38, samples=5 00:13:08.685 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:13:08.685 lat (usec) : 750=1.47% 00:13:08.685 lat (msec) : 50=97.06% 00:13:08.685 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:13:08.685 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.685 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.685 00:13:08.685 Run status group 0 (all jobs): 00:13:08.685 READ: bw=346KiB/s (354kB/s), 96.7KiB/s-98.2KiB/s (99.0kB/s-101kB/s), io=1216KiB (1245kB), run=2728-3517msec 00:13:08.685 00:13:08.685 Disk stats (read/write): 00:13:08.685 nvme0n1: ios=113/0, merge=0/0, ticks=4108/0, in_queue=4108, util=98.27% 00:13:08.685 nvme0n2: ios=80/0, merge=0/0, ticks=3285/0, in_queue=3285, util=95.68% 00:13:08.685 nvme0n3: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.36% 00:13:08.685 nvme0n4: ios=63/0, merge=0/0, ticks=2543/0, in_queue=2543, util=96.41% 00:13:08.685 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:08.685 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:09.251 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:09.251 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:09.251 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:09.251 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:09.817 
12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:09.817 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:09.817 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:09.817 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 60771 00:13:09.817 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:09.817 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:10.075 nvmf hotplug test: 
fio failed as expected 00:13:10.075 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.333 rmmod nvme_tcp 00:13:10.333 rmmod nvme_fabrics 00:13:10.333 rmmod nvme_keyring 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 57561 ']' 00:13:10.333 12:19:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 57561 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 57561 ']' 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 57561 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.333 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57561 00:13:10.592 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:10.592 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:10.592 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57561' 00:13:10.592 killing process with pid 57561 00:13:10.592 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 57561 00:13:10.592 12:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 57561 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.592 
12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.592 12:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.124 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.124 00:13:13.124 real 0m28.756s 00:13:13.124 user 2m23.877s 00:13:13.124 sys 0m7.883s 00:13:13.124 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.124 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.124 ************************************ 00:13:13.124 END TEST nvmf_fio_target 00:13:13.124 ************************************ 00:13:13.124 12:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.125 ************************************ 00:13:13.125 START TEST nvmf_bdevio 00:13:13.125 ************************************ 00:13:13.125 12:19:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:13.125 * Looking for test storage... 00:13:13.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.125 --rc genhtml_branch_coverage=1 00:13:13.125 --rc genhtml_function_coverage=1 00:13:13.125 --rc genhtml_legend=1 00:13:13.125 --rc geninfo_all_blocks=1 00:13:13.125 --rc geninfo_unexecuted_blocks=1 00:13:13.125 00:13:13.125 ' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.125 12:19:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.125 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.126 12:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.683 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.683 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.683 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.684 12:19:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:19.684 12:19:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:19.684 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:19.684 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.684 
12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:19.684 Found net devices under 0000:af:00.0: cvl_0_0 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:19.684 Found net devices under 0000:af:00.1: cvl_0_1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:19.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:13:19.684 00:13:19.684 --- 10.0.0.2 ping statistics --- 00:13:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.684 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:19.684 00:13:19.684 --- 10.0.0.1 ping statistics --- 00:13:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.684 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:19.684 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:19.685 12:19:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=65760 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 65760 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 65760 ']' 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 [2024-11-06 12:19:50.401089] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:13:19.685 [2024-11-06 12:19:50.401148] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.685 [2024-11-06 12:19:50.474274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.685 [2024-11-06 12:19:50.511279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.685 [2024-11-06 12:19:50.511317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.685 [2024-11-06 12:19:50.511324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.685 [2024-11-06 12:19:50.511329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.685 [2024-11-06 12:19:50.511334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:19.685 [2024-11-06 12:19:50.512947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:19.685 [2024-11-06 12:19:50.513061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:19.685 [2024-11-06 12:19:50.513176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.685 [2024-11-06 12:19:50.513177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 [2024-11-06 12:19:50.677893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.685 12:19:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 Malloc0 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.685 [2024-11-06 12:19:50.740985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:19.685 { 00:13:19.685 "params": { 00:13:19.685 "name": "Nvme$subsystem", 00:13:19.685 "trtype": "$TEST_TRANSPORT", 00:13:19.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:19.685 "adrfam": "ipv4", 00:13:19.685 "trsvcid": "$NVMF_PORT", 00:13:19.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:19.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:19.685 "hdgst": ${hdgst:-false}, 00:13:19.685 "ddgst": ${ddgst:-false} 00:13:19.685 }, 00:13:19.685 "method": "bdev_nvme_attach_controller" 00:13:19.685 } 00:13:19.685 EOF 00:13:19.685 )") 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:19.685 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:19.685 "params": { 00:13:19.685 "name": "Nvme1", 00:13:19.685 "trtype": "tcp", 00:13:19.685 "traddr": "10.0.0.2", 00:13:19.685 "adrfam": "ipv4", 00:13:19.685 "trsvcid": "4420", 00:13:19.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.685 "hdgst": false, 00:13:19.685 "ddgst": false 00:13:19.685 }, 00:13:19.685 "method": "bdev_nvme_attach_controller" 00:13:19.685 }' 00:13:19.685 [2024-11-06 12:19:50.789368] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:13:19.685 [2024-11-06 12:19:50.789407] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65873 ] 00:13:19.685 [2024-11-06 12:19:50.872403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.685 [2024-11-06 12:19:50.924249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.685 [2024-11-06 12:19:50.924351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.685 [2024-11-06 12:19:50.924353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.685 I/O targets: 00:13:19.685 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:19.685 00:13:19.685 00:13:19.685 CUnit - A unit testing framework for C - Version 2.1-3 00:13:19.685 http://cunit.sourceforge.net/ 00:13:19.685 00:13:19.685 00:13:19.685 Suite: bdevio tests on: Nvme1n1 00:13:19.685 Test: blockdev write read block ...passed 00:13:19.685 Test: blockdev write zeroes read block ...passed 00:13:19.685 Test: blockdev write zeroes read no split ...passed 00:13:19.685 Test: blockdev write zeroes read split ...passed 
00:13:19.685 Test: blockdev write zeroes read split partial ...passed 00:13:19.685 Test: blockdev reset ...[2024-11-06 12:19:51.202929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:19.685 [2024-11-06 12:19:51.203008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ff7c0 (9): Bad file descriptor 00:13:19.686 [2024-11-06 12:19:51.218601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:19.686 passed 00:13:19.686 Test: blockdev write read 8 blocks ...passed 00:13:19.686 Test: blockdev write read size > 128k ...passed 00:13:19.686 Test: blockdev write read invalid size ...passed 00:13:19.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:19.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:19.943 Test: blockdev write read max offset ...passed 00:13:19.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:19.943 Test: blockdev writev readv 8 blocks ...passed 00:13:19.943 Test: blockdev writev readv 30 x 1block ...passed 00:13:19.943 Test: blockdev writev readv block ...passed 00:13:19.943 Test: blockdev writev readv size > 128k ...passed 00:13:19.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:19.943 Test: blockdev comparev and writev ...[2024-11-06 12:19:51.428034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.428868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:13:19.943 [2024-11-06 12:19:51.428876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:19.943 passed 00:13:19.943 Test: blockdev nvme passthru rw ...passed 00:13:19.943 Test: blockdev nvme passthru vendor specific ...[2024-11-06 12:19:51.510754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.943 [2024-11-06 12:19:51.510770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.510870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.943 [2024-11-06 12:19:51.510880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.510983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.943 [2024-11-06 12:19:51.510992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:19.943 [2024-11-06 12:19:51.511095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.943 [2024-11-06 12:19:51.511105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:19.943 passed 00:13:19.943 Test: blockdev nvme admin passthru ...passed 00:13:20.201 Test: blockdev copy ...passed 00:13:20.201 00:13:20.201 Run Summary: Type Total Ran Passed Failed Inactive 00:13:20.201 suites 1 1 n/a 0 0 00:13:20.201 tests 23 23 23 0 0 00:13:20.201 asserts 152 152 152 0 n/a 00:13:20.201 00:13:20.201 Elapsed time = 0.949 seconds 00:13:20.201 12:19:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.201 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.202 rmmod nvme_tcp 00:13:20.202 rmmod nvme_fabrics 00:13:20.202 rmmod nvme_keyring 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 65760 ']' 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 65760 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 65760 ']' 
00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 65760 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.202 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65760 00:13:20.460 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:20.460 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:20.460 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65760' 00:13:20.460 killing process with pid 65760 00:13:20.460 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 65760 00:13:20.460 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 65760 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.460 12:19:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.460 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.993 12:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.994 00:13:22.994 real 0m9.788s 00:13:22.994 user 0m9.574s 00:13:22.994 sys 0m4.911s 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:22.994 ************************************ 00:13:22.994 END TEST nvmf_bdevio 00:13:22.994 ************************************ 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:22.994 00:13:22.994 real 4m42.333s 00:13:22.994 user 11m34.468s 00:13:22.994 sys 1m34.244s 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:22.994 ************************************ 00:13:22.994 END TEST nvmf_target_core 00:13:22.994 ************************************ 00:13:22.994 12:19:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:22.994 12:19:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:22.994 12:19:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.994 12:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.994 
************************************ 00:13:22.994 START TEST nvmf_target_extra 00:13:22.994 ************************************ 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:22.994 * Looking for test storage... 00:13:22.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:22.994 
12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:22.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.994 --rc genhtml_branch_coverage=1 00:13:22.994 --rc genhtml_function_coverage=1 00:13:22.994 --rc genhtml_legend=1 00:13:22.994 --rc geninfo_all_blocks=1 00:13:22.994 
--rc geninfo_unexecuted_blocks=1 00:13:22.994 00:13:22.994 ' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:22.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.994 --rc genhtml_branch_coverage=1 00:13:22.994 --rc genhtml_function_coverage=1 00:13:22.994 --rc genhtml_legend=1 00:13:22.994 --rc geninfo_all_blocks=1 00:13:22.994 --rc geninfo_unexecuted_blocks=1 00:13:22.994 00:13:22.994 ' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:22.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.994 --rc genhtml_branch_coverage=1 00:13:22.994 --rc genhtml_function_coverage=1 00:13:22.994 --rc genhtml_legend=1 00:13:22.994 --rc geninfo_all_blocks=1 00:13:22.994 --rc geninfo_unexecuted_blocks=1 00:13:22.994 00:13:22.994 ' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:22.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.994 --rc genhtml_branch_coverage=1 00:13:22.994 --rc genhtml_function_coverage=1 00:13:22.994 --rc genhtml_legend=1 00:13:22.994 --rc geninfo_all_blocks=1 00:13:22.994 --rc geninfo_unexecuted_blocks=1 00:13:22.994 00:13:22.994 ' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.994 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.995 ************************************ 00:13:22.995 START TEST nvmf_example 00:13:22.995 ************************************ 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:22.995 * Looking for test storage... 00:13:22.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:22.995 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.254 
12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.254 --rc genhtml_branch_coverage=1 00:13:23.254 --rc genhtml_function_coverage=1 00:13:23.254 --rc genhtml_legend=1 00:13:23.254 --rc geninfo_all_blocks=1 00:13:23.254 --rc geninfo_unexecuted_blocks=1 00:13:23.254 00:13:23.254 ' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.254 --rc genhtml_branch_coverage=1 00:13:23.254 --rc genhtml_function_coverage=1 00:13:23.254 --rc genhtml_legend=1 00:13:23.254 --rc geninfo_all_blocks=1 00:13:23.254 --rc geninfo_unexecuted_blocks=1 00:13:23.254 00:13:23.254 ' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.254 --rc genhtml_branch_coverage=1 00:13:23.254 --rc genhtml_function_coverage=1 00:13:23.254 --rc genhtml_legend=1 00:13:23.254 --rc geninfo_all_blocks=1 00:13:23.254 --rc geninfo_unexecuted_blocks=1 00:13:23.254 00:13:23.254 ' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.254 --rc 
genhtml_branch_coverage=1 00:13:23.254 --rc genhtml_function_coverage=1 00:13:23.254 --rc genhtml_legend=1 00:13:23.254 --rc geninfo_all_blocks=1 00:13:23.254 --rc geninfo_unexecuted_blocks=1 00:13:23.254 00:13:23.254 ' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.254 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:23.255 12:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.255 
12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.255 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.520 12:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:28.520 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.520 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:28.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:28.521 Found net devices under 0000:af:00.0: cvl_0_0 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.521 12:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:28.521 Found net devices under 0000:af:00.1: cvl_0_1 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.521 
12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.521 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:13:28.779 00:13:28.779 --- 10.0.0.2 ping statistics --- 00:13:28.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.779 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:13:28.779 00:13:28.779 --- 10.0.0.1 ping statistics --- 00:13:28.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.779 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.779 12:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=69803 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 69803 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 69803 ']' 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:28.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.779 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:30.152 12:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:30.152 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:40.114 Initializing NVMe Controllers 00:13:40.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.114 Initialization complete. Launching workers. 00:13:40.114 ======================================================== 00:13:40.114 Latency(us) 00:13:40.114 Device Information : IOPS MiB/s Average min max 00:13:40.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18761.18 73.29 3410.48 730.16 19042.04 00:13:40.114 ======================================================== 00:13:40.114 Total : 18761.18 73.29 3410.48 730.16 19042.04 00:13:40.114 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.114 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.114 rmmod nvme_tcp 00:13:40.114 rmmod nvme_fabrics 00:13:40.114 rmmod nvme_keyring 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 69803 ']' 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 69803 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 69803 ']' 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 69803 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69803 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69803' 00:13:40.372 killing process with pid 69803 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 69803 00:13:40.372 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 69803 00:13:40.631 nvmf threads initialize successfully 00:13:40.631 bdev subsystem init successfully 00:13:40.631 created a nvmf target service 00:13:40.631 create targets's poll groups done 00:13:40.631 all subsystems of target started 00:13:40.631 nvmf target is running 00:13:40.631 all subsystems of target stopped 00:13:40.631 destroy targets's poll groups done 00:13:40.631 destroyed the nvmf target service 00:13:40.631 bdev subsystem finish successfully 
00:13:40.631 nvmf threads destroy successfully 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.631 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.533 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.533 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:42.533 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.533 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:42.533 00:13:42.533 real 0m19.656s 00:13:42.533 user 0m46.593s 00:13:42.533 sys 0m5.711s 00:13:42.533 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.533 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:42.533 ************************************ 00:13:42.533 END TEST nvmf_example 00:13:42.533 ************************************ 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.793 ************************************ 00:13:42.793 START TEST nvmf_filesystem 00:13:42.793 ************************************ 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:42.793 * Looking for test storage... 
00:13:42.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:42.793 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:42.793 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:42.793 --rc genhtml_branch_coverage=1 00:13:42.793 --rc genhtml_function_coverage=1 00:13:42.793 --rc genhtml_legend=1 00:13:42.793 --rc geninfo_all_blocks=1 00:13:42.793 --rc geninfo_unexecuted_blocks=1 00:13:42.793 00:13:42.793 ' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.793 --rc genhtml_branch_coverage=1 00:13:42.793 --rc genhtml_function_coverage=1 00:13:42.793 --rc genhtml_legend=1 00:13:42.793 --rc geninfo_all_blocks=1 00:13:42.793 --rc geninfo_unexecuted_blocks=1 00:13:42.793 00:13:42.793 ' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.793 --rc genhtml_branch_coverage=1 00:13:42.793 --rc genhtml_function_coverage=1 00:13:42.793 --rc genhtml_legend=1 00:13:42.793 --rc geninfo_all_blocks=1 00:13:42.793 --rc geninfo_unexecuted_blocks=1 00:13:42.793 00:13:42.793 ' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.793 --rc genhtml_branch_coverage=1 00:13:42.793 --rc genhtml_function_coverage=1 00:13:42.793 --rc genhtml_legend=1 00:13:42.793 --rc geninfo_all_blocks=1 00:13:42.793 --rc geninfo_unexecuted_blocks=1 00:13:42.793 00:13:42.793 ' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:42.793 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:42.793 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:42.793 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:42.793 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:42.794 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:42.794 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:42.794 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:42.794 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:42.794 #define SPDK_CONFIG_H 00:13:42.794 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:42.794 #define SPDK_CONFIG_APPS 1 00:13:42.794 #define SPDK_CONFIG_ARCH native 00:13:42.794 #undef SPDK_CONFIG_ASAN 00:13:42.794 #undef SPDK_CONFIG_AVAHI 00:13:42.794 #undef SPDK_CONFIG_CET 00:13:42.794 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:42.794 #define SPDK_CONFIG_COVERAGE 1 00:13:42.794 #define SPDK_CONFIG_CROSS_PREFIX 00:13:42.794 #undef SPDK_CONFIG_CRYPTO 00:13:42.794 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:42.794 #undef SPDK_CONFIG_CUSTOMOCF 00:13:42.794 #undef SPDK_CONFIG_DAOS 00:13:42.794 #define SPDK_CONFIG_DAOS_DIR 00:13:42.794 #define SPDK_CONFIG_DEBUG 1 00:13:42.794 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:42.794 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:42.794 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:42.794 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:42.794 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:42.794 #undef SPDK_CONFIG_DPDK_UADK 00:13:42.794 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:42.794 #define SPDK_CONFIG_EXAMPLES 1 00:13:42.794 #undef SPDK_CONFIG_FC 00:13:42.794 #define SPDK_CONFIG_FC_PATH 00:13:42.794 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:42.794 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:42.794 #define SPDK_CONFIG_FSDEV 1 00:13:42.794 #undef SPDK_CONFIG_FUSE 00:13:42.794 #undef SPDK_CONFIG_FUZZER 00:13:42.794 #define SPDK_CONFIG_FUZZER_LIB 00:13:42.794 #undef SPDK_CONFIG_GOLANG 00:13:42.794 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:42.794 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:42.794 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:42.794 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:42.795 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:42.795 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:42.795 #undef SPDK_CONFIG_HAVE_LZ4 00:13:42.795 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:42.795 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:42.795 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:42.795 #define SPDK_CONFIG_IDXD 1 00:13:42.795 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:42.795 #undef SPDK_CONFIG_IPSEC_MB 00:13:42.795 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:42.795 #define SPDK_CONFIG_ISAL 1 00:13:42.795 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:42.795 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:42.795 #define SPDK_CONFIG_LIBDIR 00:13:42.795 #undef SPDK_CONFIG_LTO 00:13:42.795 #define SPDK_CONFIG_MAX_LCORES 128 00:13:42.795 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:42.795 #define SPDK_CONFIG_NVME_CUSE 1 00:13:42.795 #undef SPDK_CONFIG_OCF 00:13:42.795 #define SPDK_CONFIG_OCF_PATH 00:13:42.795 #define SPDK_CONFIG_OPENSSL_PATH 00:13:42.795 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:42.795 #define SPDK_CONFIG_PGO_DIR 00:13:42.795 #undef SPDK_CONFIG_PGO_USE 00:13:42.795 #define SPDK_CONFIG_PREFIX /usr/local 00:13:42.795 #undef SPDK_CONFIG_RAID5F 00:13:42.795 #undef SPDK_CONFIG_RBD 00:13:42.795 #define SPDK_CONFIG_RDMA 1 00:13:42.795 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:42.795 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:42.795 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:42.795 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:42.795 #define SPDK_CONFIG_SHARED 1 00:13:42.795 #undef SPDK_CONFIG_SMA 00:13:42.795 #define SPDK_CONFIG_TESTS 1 00:13:42.795 #undef SPDK_CONFIG_TSAN 00:13:42.795 #define SPDK_CONFIG_UBLK 1 00:13:42.795 #define SPDK_CONFIG_UBSAN 1 00:13:42.795 #undef SPDK_CONFIG_UNIT_TESTS 00:13:42.795 #undef SPDK_CONFIG_URING 00:13:42.795 #define SPDK_CONFIG_URING_PATH 00:13:42.795 #undef SPDK_CONFIG_URING_ZNS 00:13:42.795 #undef SPDK_CONFIG_USDT 00:13:42.795 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:42.795 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:42.795 #define SPDK_CONFIG_VFIO_USER 1 00:13:42.795 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:42.795 #define SPDK_CONFIG_VHOST 1 00:13:42.795 #define SPDK_CONFIG_VIRTIO 1 00:13:42.795 #undef SPDK_CONFIG_VTUNE 00:13:42.795 #define SPDK_CONFIG_VTUNE_DIR 00:13:42.795 #define SPDK_CONFIG_WERROR 1 00:13:42.795 #define SPDK_CONFIG_WPDK_DIR 00:13:42.795 #undef SPDK_CONFIG_XNVME 00:13:42.795 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:42.795 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:42.795 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.795 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:43.055 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:43.055 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:43.055 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:43.056 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:43.056 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:43.056 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:43.056 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 72392 ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 72392 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 
-- # local storage_fallback storage_candidates 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.DYKCZb 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DYKCZb/tests/target /tmp/spdk.DYKCZb 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:43.057 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=83358760960 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=94489735168 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11130974208 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:43.057 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47233499136 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47244865536 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:43.057 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=18874843136 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=18897948672 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23105536 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47243870208 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47244869632 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=999424 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9448960000 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9448972288 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:43.058 * Looking for test storage... 
00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=83358760960 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13345566720 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.058 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:43.058 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.058 --rc genhtml_branch_coverage=1 00:13:43.058 --rc genhtml_function_coverage=1 00:13:43.058 --rc genhtml_legend=1 00:13:43.058 --rc geninfo_all_blocks=1 00:13:43.058 --rc geninfo_unexecuted_blocks=1 00:13:43.058 00:13:43.058 ' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.058 --rc genhtml_branch_coverage=1 00:13:43.058 --rc genhtml_function_coverage=1 00:13:43.058 --rc genhtml_legend=1 00:13:43.058 --rc geninfo_all_blocks=1 00:13:43.058 --rc geninfo_unexecuted_blocks=1 00:13:43.058 00:13:43.058 ' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.058 --rc genhtml_branch_coverage=1 00:13:43.058 --rc genhtml_function_coverage=1 00:13:43.058 --rc genhtml_legend=1 00:13:43.058 --rc geninfo_all_blocks=1 00:13:43.058 --rc geninfo_unexecuted_blocks=1 00:13:43.058 00:13:43.058 ' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.058 --rc genhtml_branch_coverage=1 00:13:43.058 --rc genhtml_function_coverage=1 00:13:43.058 --rc genhtml_legend=1 00:13:43.058 --rc geninfo_all_blocks=1 00:13:43.058 --rc geninfo_unexecuted_blocks=1 00:13:43.058 00:13:43.058 ' 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.058 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.058 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
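The `[: : integer expression expected` error logged above from nvmf/common.sh line 33 is the classic bash failure mode of testing an empty string with an arithmetic operator. A minimal sketch of the failure and a guarded form (the `flag` variable is a hypothetical stand-in for the unset value in common.sh):

```shell
#!/usr/bin/env bash
# Testing an empty string with -eq makes '[' emit
# "[: : integer expression expected" and return a non-zero status:
flag=''
[ "$flag" -eq 1 ] 2>/dev/null || true  # this is the logged failure

# Guarding with a non-empty check avoids the error entirely.
check_flag() {
  local v="$1"
  if [ -n "$v" ] && [ "$v" -eq 1 ]; then
    echo yes
  else
    echo no
  fi
}
```

With the guard, `check_flag ''` prints `no` instead of tripping the integer-expression error, which is why the script above still proceeds past line 33 despite the warning.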
MALLOC_BDEV_SIZE=512 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.059 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.316 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:48.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:48.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:48.316 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.317 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:48.317 Found net devices under 0000:af:00.0: cvl_0_0 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:48.317 Found net devices under 0000:af:00.1: cvl_0_1 00:13:48.317 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.317 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:13:48.574 00:13:48.574 --- 10.0.0.2 ping statistics --- 00:13:48.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.574 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:13:48.574 00:13:48.574 --- 10.0.0.1 ping statistics --- 00:13:48.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.574 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.574 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:48.832 12:20:20 
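The namespace bring-up the log just performed (netns creation, moving the target NIC, address assignment, the `ipts` firewall rule, and the two ping checks) can be condensed into one sketch. The `cvl_0_*` interface names come from this log's host and are placeholders elsewhere; the sequence needs root, so the sketch only runs where that holds:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup from the log above. Interface
# names (cvl_0_0 / cvl_0_1) are specific to this test host; treat them
# as assumptions. Requires root to actually execute.
setup_netns() {
  local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
  ip netns add "$ns"
  ip link set "$tgt" netns "$ns"              # target NIC lives in the netns
  ip addr add 10.0.0.1/24 dev "$ini"          # initiator IP, default netns
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target IP
  ip link set "$ini" up
  ip netns exec "$ns" ip link set "$tgt" up
  ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port on the initiator side, as the ipts helper does.
  iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
  # Verify reachability both ways, mirroring the log's ping checks.
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Only attempt the setup where the assumptions hold.
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_1 >/dev/null 2>&1; then
  setup_netns
else
  echo "setup_netns: skipped (needs root and the cvl_* interfaces)"
fi
```

Putting the target interface in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a real NIC pair.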
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 ************************************ 00:13:48.832 START TEST nvmf_filesystem_no_in_capsule 00:13:48.832 ************************************ 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=75560 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 75560 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 75560 ']' 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:48.832 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 [2024-11-06 12:20:20.307281] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:13:48.832 [2024-11-06 12:20:20.307340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.832 [2024-11-06 12:20:20.401301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.089 [2024-11-06 12:20:20.451587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.089 [2024-11-06 12:20:20.451628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:49.089 [2024-11-06 12:20:20.451638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.089 [2024-11-06 12:20:20.451647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.089 [2024-11-06 12:20:20.451655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.089 [2024-11-06 12:20:20.456353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.089 [2024-11-06 12:20:20.456371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.089 [2024-11-06 12:20:20.456486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.089 [2024-11-06 12:20:20.456491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.652 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.653 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:49.653 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.653 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.653 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 [2024-11-06 12:20:21.283144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 [2024-11-06 12:20:21.426241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:49.910 12:20:21 
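The `rpc_cmd` calls in this stretch of the log form the standard SPDK bring-up sequence: create the TCP transport (with in-capsule data size 0 for this test), create a malloc bdev, create a subsystem, attach the namespace, and add the listener. A sketch of the same sequence driven directly through `rpc.py`; the path and a running `nvmf_tgt` are assumptions, so it skips when they are absent:

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence the log drives via rpc_cmd. RPC path and a
# listening nvmf_tgt are assumed; values mirror the log exactly.
RPC=${RPC:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}

nvmf_bringup() {
  "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
  "$RPC" bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512 B blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

if [ -x "$RPC" ]; then
  nvmf_bringup
else
  echo "nvmf_bringup: skipped (rpc.py not found at $RPC)"
fi
```

The ordering matters: the transport must exist before a listener can be added, and the bdev must exist before it can be attached as a namespace, which is exactly the order the test script follows.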
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.910 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:49.910 { 00:13:49.910 "name": "Malloc1", 00:13:49.910 "aliases": [ 00:13:49.910 "79b03e18-15b5-4ba1-b727-d46d238afa15" 00:13:49.910 ], 00:13:49.910 "product_name": "Malloc disk", 00:13:49.910 "block_size": 512, 00:13:49.910 "num_blocks": 1048576, 00:13:49.910 "uuid": "79b03e18-15b5-4ba1-b727-d46d238afa15", 00:13:49.910 "assigned_rate_limits": { 00:13:49.910 "rw_ios_per_sec": 0, 00:13:49.910 "rw_mbytes_per_sec": 0, 00:13:49.910 "r_mbytes_per_sec": 0, 00:13:49.910 "w_mbytes_per_sec": 0 00:13:49.910 }, 00:13:49.910 "claimed": true, 00:13:49.910 "claim_type": "exclusive_write", 00:13:49.910 "zoned": false, 00:13:49.910 "supported_io_types": { 00:13:49.910 "read": true, 00:13:49.910 "write": true, 00:13:49.910 "unmap": true, 00:13:49.910 "flush": true, 00:13:49.911 "reset": true, 00:13:49.911 "nvme_admin": false, 00:13:49.911 "nvme_io": false, 00:13:49.911 "nvme_io_md": false, 00:13:49.911 "write_zeroes": true, 00:13:49.911 "zcopy": true, 00:13:49.911 "get_zone_info": false, 00:13:49.911 "zone_management": false, 00:13:49.911 "zone_append": false, 00:13:49.911 "compare": false, 00:13:49.911 "compare_and_write": 
false, 00:13:49.911 "abort": true, 00:13:49.911 "seek_hole": false, 00:13:49.911 "seek_data": false, 00:13:49.911 "copy": true, 00:13:49.911 "nvme_iov_md": false 00:13:49.911 }, 00:13:49.911 "memory_domains": [ 00:13:49.911 { 00:13:49.911 "dma_device_id": "system", 00:13:49.911 "dma_device_type": 1 00:13:49.911 }, 00:13:49.911 { 00:13:49.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.911 "dma_device_type": 2 00:13:49.911 } 00:13:49.911 ], 00:13:49.911 "driver_specific": {} 00:13:49.911 } 00:13:49.911 ]' 00:13:49.911 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:49.911 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:49.911 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:50.168 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:50.168 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:50.168 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:50.168 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:50.168 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.537 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:51.537 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:51.537 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.537 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:51.537 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:53.431 12:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:53.431 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:53.689 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:53.689 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:55.057 12:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.057 ************************************ 00:13:55.057 START TEST filesystem_ext4 00:13:55.057 ************************************ 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:55.057 12:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:55.057 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:55.058 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:55.058 mke2fs 1.47.0 (5-Feb-2023) 00:13:55.058 Discarding device blocks: 0/522240 done 00:13:55.058 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:55.058 Filesystem UUID: 1c59ba55-41f9-4762-8add-83eb3b730964 00:13:55.058 Superblock backups stored on blocks: 00:13:55.058 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:55.058 00:13:55.058 Allocating group tables: 0/64 done 00:13:55.058 Writing inode tables: 0/64 done 00:13:55.058 Creating journal (8192 blocks): done 00:13:57.246 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:13:57.246 00:13:57.246 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:57.246 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:03.793 12:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 75560 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:03.793 00:14:03.793 real 0m8.545s 00:14:03.793 user 0m0.025s 00:14:03.793 sys 0m0.078s 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:03.793 ************************************ 00:14:03.793 END TEST filesystem_ext4 00:14:03.793 ************************************ 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:03.793 
12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:03.793 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.793 ************************************ 00:14:03.793 START TEST filesystem_btrfs 00:14:03.793 ************************************ 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:14:03.794 12:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:14:03.794 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:03.794 btrfs-progs v6.8.1 00:14:03.794 See https://btrfs.readthedocs.io for more information. 00:14:03.794 00:14:03.794 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:03.794 NOTE: several default settings have changed in version 5.15, please make sure 00:14:03.794 this does not affect your deployments: 00:14:03.794 - DUP for metadata (-m dup) 00:14:03.794 - enabled no-holes (-O no-holes) 00:14:03.794 - enabled free-space-tree (-R free-space-tree) 00:14:03.794 00:14:03.794 Label: (null) 00:14:03.794 UUID: ba907946-f876-4a7e-adf8-af082a42acdf 00:14:03.794 Node size: 16384 00:14:03.794 Sector size: 4096 (CPU page size: 4096) 00:14:03.794 Filesystem size: 510.00MiB 00:14:03.794 Block group profiles: 00:14:03.794 Data: single 8.00MiB 00:14:03.794 Metadata: DUP 32.00MiB 00:14:03.794 System: DUP 8.00MiB 00:14:03.794 SSD detected: yes 00:14:03.794 Zoned device: no 00:14:03.794 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:03.794 Checksum: crc32c 00:14:03.794 Number of devices: 1 00:14:03.794 Devices: 00:14:03.794 ID SIZE PATH 00:14:03.794 1 510.00MiB /dev/nvme0n1p1 00:14:03.794 00:14:03.794 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:14:03.794 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:04.051 12:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 75560 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:04.051 00:14:04.051 real 0m0.725s 00:14:04.051 user 0m0.034s 00:14:04.051 sys 0m0.107s 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.051 
12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:04.051 ************************************ 00:14:04.051 END TEST filesystem_btrfs 00:14:04.051 ************************************ 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.051 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.308 ************************************ 00:14:04.308 START TEST filesystem_xfs 00:14:04.308 ************************************ 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:14:04.308 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:04.308 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:04.308 = sectsz=512 attr=2, projid32bit=1 00:14:04.308 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:04.308 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:04.308 data = bsize=4096 blocks=130560, imaxpct=25 00:14:04.308 = sunit=0 swidth=0 blks 00:14:04.308 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:04.308 log =internal log bsize=4096 blocks=16384, version=2 00:14:04.308 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:04.308 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:05.678 Discarding blocks...Done. 
00:14:05.678 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:14:05.678 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 75560 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:08.201 12:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:08.201 00:14:08.201 real 0m3.752s 00:14:08.201 user 0m0.022s 00:14:08.201 sys 0m0.076s 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:08.201 ************************************ 00:14:08.201 END TEST filesystem_xfs 00:14:08.201 ************************************ 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:08.201 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 75560 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 75560 ']' 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 75560 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:08.458 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75560 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75560' 00:14:08.459 killing process with pid 75560 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 75560 00:14:08.459 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 75560 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:08.717 00:14:08.717 real 0m20.031s 00:14:08.717 user 1m19.013s 00:14:08.717 sys 0m1.507s 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.717 ************************************ 00:14:08.717 END TEST nvmf_filesystem_no_in_capsule 00:14:08.717 ************************************ 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:08.717 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:08.717 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:08.974 ************************************ 00:14:08.974 START TEST nvmf_filesystem_in_capsule 00:14:08.974 ************************************ 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=79463 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 79463 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 79463 ']' 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.975 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:08.975 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.975 [2024-11-06 12:20:40.411897] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:14:08.975 [2024-11-06 12:20:40.411954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.975 [2024-11-06 12:20:40.513599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.975 [2024-11-06 12:20:40.559811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.975 [2024-11-06 12:20:40.559861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.975 [2024-11-06 12:20:40.559871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.975 [2024-11-06 12:20:40.559880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.975 [2024-11-06 12:20:40.559888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.975 [2024-11-06 12:20:40.561945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.975 [2024-11-06 12:20:40.562051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.975 [2024-11-06 12:20:40.562139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.975 [2024-11-06 12:20:40.562140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.232 [2024-11-06 12:20:40.701212] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.232 Malloc1 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.232 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.489 [2024-11-06 12:20:40.849655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.489 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:09.489 { 00:14:09.489 "name": "Malloc1", 00:14:09.489 "aliases": [ 00:14:09.489 "1f48a038-3502-48a1-890a-e908521cfa1f" 00:14:09.489 ], 00:14:09.489 "product_name": "Malloc disk", 00:14:09.489 "block_size": 512, 00:14:09.489 "num_blocks": 1048576, 00:14:09.489 "uuid": "1f48a038-3502-48a1-890a-e908521cfa1f", 00:14:09.489 "assigned_rate_limits": { 00:14:09.489 "rw_ios_per_sec": 0, 00:14:09.489 "rw_mbytes_per_sec": 0, 00:14:09.489 "r_mbytes_per_sec": 0, 00:14:09.489 "w_mbytes_per_sec": 0 00:14:09.489 }, 00:14:09.489 "claimed": true, 00:14:09.489 "claim_type": "exclusive_write", 00:14:09.489 "zoned": false, 00:14:09.489 "supported_io_types": { 00:14:09.489 "read": true, 00:14:09.489 "write": true, 00:14:09.489 "unmap": true, 00:14:09.489 "flush": true, 00:14:09.489 "reset": true, 00:14:09.489 "nvme_admin": false, 00:14:09.489 "nvme_io": false, 00:14:09.489 "nvme_io_md": false, 00:14:09.489 "write_zeroes": true, 00:14:09.489 "zcopy": true, 00:14:09.489 "get_zone_info": false, 00:14:09.489 "zone_management": false, 00:14:09.489 "zone_append": false, 00:14:09.489 "compare": false, 00:14:09.489 "compare_and_write": false, 00:14:09.489 "abort": true, 00:14:09.489 "seek_hole": false, 00:14:09.489 "seek_data": false, 00:14:09.489 "copy": true, 00:14:09.489 "nvme_iov_md": false 00:14:09.489 }, 00:14:09.489 "memory_domains": [ 00:14:09.489 { 00:14:09.489 "dma_device_id": "system", 00:14:09.489 "dma_device_type": 1 00:14:09.489 }, 00:14:09.489 { 00:14:09.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.489 "dma_device_type": 2 00:14:09.489 } 00:14:09.489 ], 00:14:09.489 
"driver_specific": {} 00:14:09.489 } 00:14:09.489 ]' 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:09.489 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:09.490 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.857 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.857 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:14:10.857 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.857 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:14:10.857 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:12.751 12:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:12.751 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:13.009 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:13.267 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.200 ************************************ 00:14:14.200 START TEST filesystem_in_capsule_ext4 00:14:14.200 ************************************ 00:14:14.200 12:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:14:14.200 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:14:14.201 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:14.201 mke2fs 1.47.0 (5-Feb-2023) 00:14:14.201 Discarding device blocks: 
0/522240 done 00:14:14.458 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:14.458 Filesystem UUID: feb0297f-5137-42a6-b504-f525c7d724c0 00:14:14.458 Superblock backups stored on blocks: 00:14:14.458 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:14.458 00:14:14.458 Allocating group tables: 0/64 done 00:14:14.458 Writing inode tables: 0/64 done 00:14:14.717 Creating journal (8192 blocks): done 00:14:14.717 Writing superblocks and filesystem accounting information: 0/64 done 00:14:14.717 00:14:14.717 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:14:14.717 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 79463 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:19.979 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:19.980 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:19.980 00:14:19.980 real 0m5.879s 00:14:19.980 user 0m0.027s 00:14:19.980 sys 0m0.070s 00:14:19.980 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:19.980 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:19.980 ************************************ 00:14:19.980 END TEST filesystem_in_capsule_ext4 00:14:19.980 ************************************ 00:14:19.980 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:19.980 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.238 ************************************ 00:14:20.238 START TEST 
filesystem_in_capsule_btrfs 00:14:20.238 ************************************ 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 
-- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:20.238 btrfs-progs v6.8.1 00:14:20.238 See https://btrfs.readthedocs.io for more information. 00:14:20.238 00:14:20.238 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:20.238 NOTE: several default settings have changed in version 5.15, please make sure 00:14:20.238 this does not affect your deployments: 00:14:20.238 - DUP for metadata (-m dup) 00:14:20.238 - enabled no-holes (-O no-holes) 00:14:20.238 - enabled free-space-tree (-R free-space-tree) 00:14:20.238 00:14:20.238 Label: (null) 00:14:20.238 UUID: eae8ffac-1c68-4050-a156-1c341fcc1cdf 00:14:20.238 Node size: 16384 00:14:20.238 Sector size: 4096 (CPU page size: 4096) 00:14:20.238 Filesystem size: 510.00MiB 00:14:20.238 Block group profiles: 00:14:20.238 Data: single 8.00MiB 00:14:20.238 Metadata: DUP 32.00MiB 00:14:20.238 System: DUP 8.00MiB 00:14:20.238 SSD detected: yes 00:14:20.238 Zoned device: no 00:14:20.238 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:20.238 Checksum: crc32c 00:14:20.238 Number of devices: 1 00:14:20.238 Devices: 00:14:20.238 ID SIZE PATH 00:14:20.238 1 510.00MiB /dev/nvme0n1p1 00:14:20.238 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:14:20.238 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 79463 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:21.172 00:14:21.172 real 0m1.130s 00:14:21.172 user 0m0.029s 00:14:21.172 sys 0m0.114s 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.172 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 ************************************ 00:14:21.172 END TEST filesystem_in_capsule_btrfs 00:14:21.172 ************************************ 00:14:21.430 12:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:21.430 ************************************ 00:14:21.430 START TEST filesystem_in_capsule_xfs 00:14:21.430 ************************************ 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:14:21.430 
12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:14:21.430 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:21.430 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:21.430 = sectsz=512 attr=2, projid32bit=1 00:14:21.430 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:21.431 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:21.431 data = bsize=4096 blocks=130560, imaxpct=25 00:14:21.431 = sunit=0 swidth=0 blks 00:14:21.431 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:21.431 log =internal log bsize=4096 blocks=16384, version=2 00:14:21.431 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:21.431 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:22.365 Discarding blocks...Done. 
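The `make_filesystem` calls traced above pick a per-filesystem "force" flag before invoking mkfs: the `'[' ext4 = ext4 ']'` branch sets `force=-F` (mke2fs uses uppercase `-F`), while btrfs and xfs fall through to `force=-f`. A sketch of that selection logic (illustrative; the real helper in the trace also retries on failure):

```shell
# Map a filesystem type to the force flag its mkfs tool expects,
# mirroring the force=-F / force=-f assignments in the trace above.
fs_force_flag() {
    case $1 in
        ext4) printf '%s' -F ;;   # mke2fs: uppercase -F
        *)    printf '%s' -f ;;   # mkfs.btrfs, mkfs.xfs: lowercase -f
    esac
}

# Hypothetical wrapper showing how the flag would be used.
make_filesystem() {
    local fstype=$1 dev_name=$2
    "mkfs.$fstype" "$(fs_force_flag "$fstype")" "$dev_name"
}
```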
00:14:22.365 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:14:22.365 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 79463 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
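Each filesystem test above follows the same smoke-test shape: mount the new partition, create and sync a file, remove it, unmount, then use `kill -0` on the target PID to confirm `nvmf_tgt` survived the I/O. Condensed as a sketch (paths and the PID argument are illustrative, not the script's exact variables):

```shell
# Sketch of the per-filesystem smoke test driven in the trace:
# exercise a file on the mounted filesystem, unmount, and verify
# that the nvmf target process is still alive afterwards.
fs_smoke_test() {
    local dev=$1 mnt=$2 tgt_pid=$3
    mount "$dev" "$mnt" || return 1
    touch "$mnt/aaa" && sync        # create a file and flush it out
    rm "$mnt/aaa" && sync           # remove it and flush again
    umount "$mnt" || return 1
    kill -0 "$tgt_pid"              # target must not have crashed
}
```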
00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:24.894 00:14:24.894 real 0m3.588s 00:14:24.894 user 0m0.025s 00:14:24.894 sys 0m0.074s 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:24.894 ************************************ 00:14:24.894 END TEST filesystem_in_capsule_xfs 00:14:24.894 ************************************ 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:24.894 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.152 12:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 79463 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 79463 ']' 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 79463 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:25.152 12:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79463 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79463' 00:14:25.152 killing process with pid 79463 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 79463 00:14:25.152 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 79463 00:14:25.411 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:25.411 00:14:25.411 real 0m16.677s 00:14:25.411 user 1m5.523s 00:14:25.411 sys 0m1.382s 00:14:25.411 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:25.411 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.411 ************************************ 00:14:25.411 END TEST nvmf_filesystem_in_capsule 00:14:25.411 ************************************ 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.670 rmmod nvme_tcp 00:14:25.670 rmmod nvme_fabrics 00:14:25.670 rmmod nvme_keyring 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.670 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:28.203 00:14:28.203 real 0m45.015s 00:14:28.203 user 2m26.497s 00:14:28.203 sys 0m7.260s 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.203 ************************************ 00:14:28.203 END TEST nvmf_filesystem 00:14:28.203 ************************************ 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.203 ************************************ 00:14:28.203 START TEST nvmf_target_discovery 00:14:28.203 ************************************ 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:28.203 * Looking for test storage... 
00:14:28.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:28.203 
12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:28.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.203 --rc genhtml_branch_coverage=1 00:14:28.203 --rc genhtml_function_coverage=1 00:14:28.203 --rc genhtml_legend=1 00:14:28.203 --rc geninfo_all_blocks=1 00:14:28.203 --rc geninfo_unexecuted_blocks=1 00:14:28.203 00:14:28.203 ' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:28.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.203 --rc genhtml_branch_coverage=1 00:14:28.203 --rc genhtml_function_coverage=1 00:14:28.203 --rc genhtml_legend=1 00:14:28.203 --rc geninfo_all_blocks=1 00:14:28.203 --rc geninfo_unexecuted_blocks=1 00:14:28.203 00:14:28.203 ' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:28.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.203 --rc genhtml_branch_coverage=1 00:14:28.203 --rc genhtml_function_coverage=1 00:14:28.203 --rc genhtml_legend=1 00:14:28.203 --rc geninfo_all_blocks=1 00:14:28.203 --rc geninfo_unexecuted_blocks=1 00:14:28.203 00:14:28.203 ' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:28.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.203 --rc genhtml_branch_coverage=1 00:14:28.203 --rc genhtml_function_coverage=1 00:14:28.203 --rc genhtml_legend=1 00:14:28.203 --rc geninfo_all_blocks=1 00:14:28.203 --rc geninfo_unexecuted_blocks=1 00:14:28.203 00:14:28.203 ' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.203 12:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.203 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.204 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.469 12:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.469 12:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:33.469 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:33.469 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.469 12:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:33.469 Found net devices under 0000:af:00.0: cvl_0_0 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.469 12:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:33.469 Found net devices under 0000:af:00.1: cvl_0_1 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.469 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.470 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.470 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:33.470 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:33.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:14:33.470 00:14:33.470 --- 10.0.0.2 ping statistics --- 00:14:33.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.470 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:14:33.470 00:14:33.470 --- 10.0.0.1 ping statistics --- 00:14:33.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.470 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:33.470 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=86429 00:14:33.730 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 86429 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 86429 ']' 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.730 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.730 [2024-11-06 12:21:05.178333] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:14:33.730 [2024-11-06 12:21:05.178392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.730 [2024-11-06 12:21:05.280302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.730 [2024-11-06 12:21:05.329918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:33.730 [2024-11-06 12:21:05.329963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.730 [2024-11-06 12:21:05.329973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.730 [2024-11-06 12:21:05.329984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.730 [2024-11-06 12:21:05.329992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.730 [2024-11-06 12:21:05.332035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.730 [2024-11-06 12:21:05.332058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.730 [2024-11-06 12:21:05.332168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.730 [2024-11-06 12:21:05.332169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 [2024-11-06 12:21:06.104939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 Null1 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 [2024-11-06 12:21:06.146649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.668 Null2 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:34.668 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 Null3 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 Null4 00:14:34.669 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:34.929 00:14:34.929 Discovery Log Number of Records 6, Generation counter 6 00:14:34.929 =====Discovery Log Entry 0====== 00:14:34.929 trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: current discovery subsystem 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4420 00:14:34.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: explicit discovery connections, duplicate discovery information 00:14:34.929 sectype: none 00:14:34.929 =====Discovery Log Entry 1====== 00:14:34.929 trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: nvme subsystem 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4420 00:14:34.929 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: none 00:14:34.929 sectype: none 00:14:34.929 =====Discovery Log Entry 2====== 00:14:34.929 
trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: nvme subsystem 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4420 00:14:34.929 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: none 00:14:34.929 sectype: none 00:14:34.929 =====Discovery Log Entry 3====== 00:14:34.929 trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: nvme subsystem 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4420 00:14:34.929 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: none 00:14:34.929 sectype: none 00:14:34.929 =====Discovery Log Entry 4====== 00:14:34.929 trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: nvme subsystem 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4420 00:14:34.929 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: none 00:14:34.929 sectype: none 00:14:34.929 =====Discovery Log Entry 5====== 00:14:34.929 trtype: tcp 00:14:34.929 adrfam: ipv4 00:14:34.929 subtype: discovery subsystem referral 00:14:34.929 treq: not required 00:14:34.929 portid: 0 00:14:34.929 trsvcid: 4430 00:14:34.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:34.929 traddr: 10.0.0.2 00:14:34.929 eflags: none 00:14:34.929 sectype: none 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:34.929 Perform nvmf subsystem discovery via RPC 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 [ 00:14:34.929 { 00:14:34.929 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:34.929 "subtype": "Discovery", 00:14:34.929 "listen_addresses": [ 00:14:34.929 { 00:14:34.929 "trtype": "TCP", 00:14:34.929 "adrfam": "IPv4", 00:14:34.929 "traddr": "10.0.0.2", 00:14:34.929 "trsvcid": "4420" 00:14:34.929 } 00:14:34.929 ], 00:14:34.929 "allow_any_host": true, 00:14:34.929 "hosts": [] 00:14:34.929 }, 00:14:34.929 { 00:14:34.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.929 "subtype": "NVMe", 00:14:34.929 "listen_addresses": [ 00:14:34.929 { 00:14:34.929 "trtype": "TCP", 00:14:34.929 "adrfam": "IPv4", 00:14:34.929 "traddr": "10.0.0.2", 00:14:34.929 "trsvcid": "4420" 00:14:34.929 } 00:14:34.929 ], 00:14:34.929 "allow_any_host": true, 00:14:34.929 "hosts": [], 00:14:34.929 "serial_number": "SPDK00000000000001", 00:14:34.929 "model_number": "SPDK bdev Controller", 00:14:34.929 "max_namespaces": 32, 00:14:34.929 "min_cntlid": 1, 00:14:34.929 "max_cntlid": 65519, 00:14:34.929 "namespaces": [ 00:14:34.929 { 00:14:34.929 "nsid": 1, 00:14:34.929 "bdev_name": "Null1", 00:14:34.929 "name": "Null1", 00:14:34.929 "nguid": "4EE0326009CE46019C41CE26531C72CE", 00:14:34.929 "uuid": "4ee03260-09ce-4601-9c41-ce26531c72ce" 00:14:34.929 } 00:14:34.929 ] 00:14:34.929 }, 00:14:34.929 { 00:14:34.929 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:34.929 "subtype": "NVMe", 00:14:34.929 "listen_addresses": [ 00:14:34.929 { 00:14:34.929 "trtype": "TCP", 00:14:34.929 "adrfam": "IPv4", 00:14:34.929 "traddr": "10.0.0.2", 00:14:34.929 "trsvcid": "4420" 00:14:34.929 } 00:14:34.929 ], 00:14:34.929 "allow_any_host": true, 00:14:34.929 "hosts": [], 00:14:34.929 "serial_number": "SPDK00000000000002", 00:14:34.929 "model_number": "SPDK bdev Controller", 00:14:34.929 "max_namespaces": 32, 00:14:34.929 "min_cntlid": 1, 00:14:34.929 "max_cntlid": 65519, 00:14:34.929 "namespaces": [ 00:14:34.929 { 00:14:34.929 "nsid": 1, 00:14:34.929 "bdev_name": "Null2", 00:14:34.929 "name": "Null2", 00:14:34.929 "nguid": "D6FB592E124B4E4486F4C4B56BA96DBD", 
00:14:34.929 "uuid": "d6fb592e-124b-4e44-86f4-c4b56ba96dbd" 00:14:34.929 } 00:14:34.929 ] 00:14:34.929 }, 00:14:34.929 { 00:14:34.929 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:34.929 "subtype": "NVMe", 00:14:34.929 "listen_addresses": [ 00:14:34.929 { 00:14:34.929 "trtype": "TCP", 00:14:34.929 "adrfam": "IPv4", 00:14:34.929 "traddr": "10.0.0.2", 00:14:34.929 "trsvcid": "4420" 00:14:34.929 } 00:14:34.929 ], 00:14:34.929 "allow_any_host": true, 00:14:34.929 "hosts": [], 00:14:34.929 "serial_number": "SPDK00000000000003", 00:14:34.929 "model_number": "SPDK bdev Controller", 00:14:34.929 "max_namespaces": 32, 00:14:34.929 "min_cntlid": 1, 00:14:34.929 "max_cntlid": 65519, 00:14:34.929 "namespaces": [ 00:14:34.929 { 00:14:34.929 "nsid": 1, 00:14:34.929 "bdev_name": "Null3", 00:14:34.929 "name": "Null3", 00:14:34.929 "nguid": "B273ECAE24E44DFE920B0B9AE4B126CA", 00:14:34.929 "uuid": "b273ecae-24e4-4dfe-920b-0b9ae4b126ca" 00:14:34.929 } 00:14:34.929 ] 00:14:34.929 }, 00:14:34.929 { 00:14:34.929 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:34.929 "subtype": "NVMe", 00:14:34.929 "listen_addresses": [ 00:14:34.929 { 00:14:34.929 "trtype": "TCP", 00:14:34.929 "adrfam": "IPv4", 00:14:34.929 "traddr": "10.0.0.2", 00:14:34.929 "trsvcid": "4420" 00:14:34.929 } 00:14:34.929 ], 00:14:34.929 "allow_any_host": true, 00:14:34.929 "hosts": [], 00:14:34.929 "serial_number": "SPDK00000000000004", 00:14:34.929 "model_number": "SPDK bdev Controller", 00:14:34.929 "max_namespaces": 32, 00:14:34.929 "min_cntlid": 1, 00:14:34.929 "max_cntlid": 65519, 00:14:34.929 "namespaces": [ 00:14:34.929 { 00:14:34.929 "nsid": 1, 00:14:34.929 "bdev_name": "Null4", 00:14:34.929 "name": "Null4", 00:14:34.929 "nguid": "780BCECF450A41EDB87BE0A18439FA0A", 00:14:34.929 "uuid": "780bcecf-450a-41ed-b87b-e0a18439fa0a" 00:14:34.929 } 00:14:34.929 ] 00:14:34.929 } 00:14:34.929 ] 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.929 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:34.929 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.930 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.189 rmmod nvme_tcp 00:14:35.189 rmmod nvme_fabrics 00:14:35.189 rmmod nvme_keyring 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 86429 ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 86429 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 86429 ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 86429 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:14:35.189 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86429 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86429' 00:14:35.189 killing process with pid 86429 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 86429 00:14:35.189 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 86429 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:35.448 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- 
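The `nvmftestfini` teardown traced above locates the nvmf_tgt PID (86429 here), confirms it is still alive with `kill -0`, then kills and waits on it. A simplified stand-in for that pattern (not the actual `autotest_common.sh` `killprocess`, which additionally checks the process name via `ps` and refuses to kill `sudo`):

```shell
#!/usr/bin/env bash
# Simplified sketch of the kill-then-reap pattern from the trace above.
# Assumes the target PID is a child of this shell so `wait` can reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
  kill "$pid" 2>/dev/null || true         # send SIGTERM
  wait "$pid" 2>/dev/null || true         # reap it if it is our child
}

sleep 300 &                               # stand-in for the nvmf_tgt process
killprocess "$!"
```

In the real harness the target is a child of the test script, so `wait` both reaps it and yields its exit status, which is why the trace shows `kill 86429` followed by `wait 86429`.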
# remove_spdk_ns 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.449 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.353 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.612 00:14:37.613 real 0m9.701s 00:14:37.613 user 0m8.145s 00:14:37.613 sys 0m4.680s 00:14:37.613 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:37.613 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.613 ************************************ 00:14:37.613 END TEST nvmf_target_discovery 00:14:37.613 ************************************ 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.613 ************************************ 00:14:37.613 START TEST nvmf_referrals 00:14:37.613 ************************************ 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:37.613 * Looking for test storage... 
00:14:37.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:37.613 12:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.613 
--rc genhtml_branch_coverage=1 00:14:37.613 --rc genhtml_function_coverage=1 00:14:37.613 --rc genhtml_legend=1 00:14:37.613 --rc geninfo_all_blocks=1 00:14:37.613 --rc geninfo_unexecuted_blocks=1 00:14:37.613 00:14:37.613 ' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.613 --rc genhtml_branch_coverage=1 00:14:37.613 --rc genhtml_function_coverage=1 00:14:37.613 --rc genhtml_legend=1 00:14:37.613 --rc geninfo_all_blocks=1 00:14:37.613 --rc geninfo_unexecuted_blocks=1 00:14:37.613 00:14:37.613 ' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.613 --rc genhtml_branch_coverage=1 00:14:37.613 --rc genhtml_function_coverage=1 00:14:37.613 --rc genhtml_legend=1 00:14:37.613 --rc geninfo_all_blocks=1 00:14:37.613 --rc geninfo_unexecuted_blocks=1 00:14:37.613 00:14:37.613 ' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.613 --rc genhtml_branch_coverage=1 00:14:37.613 --rc genhtml_function_coverage=1 00:14:37.613 --rc genhtml_legend=1 00:14:37.613 --rc geninfo_all_blocks=1 00:14:37.613 --rc geninfo_unexecuted_blocks=1 00:14:37.613 00:14:37.613 ' 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.613 
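Before enabling the lcov options, the harness runs `cmp_versions 1.15 '<' 2` from `scripts/common.sh`, splitting each version string on `.`, `-`, or `:` and comparing the fields numerically, as traced above. A minimal standalone sketch of that dotted-version less-than check (a hypothetical `lt` helper, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the field-by-field version comparison
# traced above; missing fields are treated as 0 (so 1.15 < 2, and 1.9 < 1.10).
lt() {
  local IFS='.-:' v a b
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0}; b=${ver2[v]:-0}
    (( a < b )) && return 0   # strictly less at this field
    (( a > b )) && return 1   # strictly greater: not less-than
  done
  return 1                    # all fields equal: not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Comparing field by field rather than lexically is what makes `1.9 < 1.10` come out true, which a plain string comparison would get wrong.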
12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.613 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.872 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.873 12:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.873 12:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.873 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:43.147 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.147 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:43.148 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:43.148 Found net devices under 0000:af:00.0: cvl_0_0 00:14:43.148 12:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:43.148 Found net devices under 0000:af:00.1: cvl_0_1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:14:43.148 00:14:43.148 --- 10.0.0.2 ping statistics --- 00:14:43.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.148 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:14:43.148 00:14:43.148 --- 10.0.0.1 ping statistics --- 00:14:43.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.148 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=90592 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 90592 00:14:43.148 
12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 90592 ']' 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.148 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.148 [2024-11-06 12:21:14.611336] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:14:43.148 [2024-11-06 12:21:14.611394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.148 [2024-11-06 12:21:14.713102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.148 [2024-11-06 12:21:14.763314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.148 [2024-11-06 12:21:14.763357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:43.148 [2024-11-06 12:21:14.763368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.148 [2024-11-06 12:21:14.763377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.148 [2024-11-06 12:21:14.763384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.407 [2024-11-06 12:21:14.765343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.407 [2024-11-06 12:21:14.765434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.407 [2024-11-06 12:21:14.765568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.407 [2024-11-06 12:21:14.765571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 [2024-11-06 12:21:14.917925] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 [2024-11-06 12:21:14.927612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:43.407 12:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.407 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:43.407 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.665 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.666 12:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.666 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:43.924 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:44.182 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:44.439 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.440 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.700 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:44.958 12:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:44.958 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:45.216 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.474 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.474 rmmod nvme_tcp 00:14:45.474 rmmod nvme_fabrics 00:14:45.474 rmmod nvme_keyring 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 90592 ']' 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 90592 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 90592 ']' 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 90592 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:45.474 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90592 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90592' 00:14:45.733 killing process with pid 90592 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 
90592 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 90592 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.263 00:14:48.263 real 0m10.341s 00:14:48.263 user 0m12.450s 00:14:48.263 sys 0m4.785s 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:48.263 ************************************ 
00:14:48.263 END TEST nvmf_referrals 00:14:48.263 ************************************ 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.263 ************************************ 00:14:48.263 START TEST nvmf_connect_disconnect 00:14:48.263 ************************************ 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:48.263 * Looking for test storage... 
00:14:48.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.263 --rc genhtml_branch_coverage=1 00:14:48.263 --rc genhtml_function_coverage=1 00:14:48.263 --rc genhtml_legend=1 00:14:48.263 --rc geninfo_all_blocks=1 00:14:48.263 --rc geninfo_unexecuted_blocks=1 00:14:48.263 00:14:48.263 ' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.263 --rc genhtml_branch_coverage=1 00:14:48.263 --rc genhtml_function_coverage=1 00:14:48.263 --rc genhtml_legend=1 00:14:48.263 --rc geninfo_all_blocks=1 00:14:48.263 --rc geninfo_unexecuted_blocks=1 00:14:48.263 00:14:48.263 ' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.263 --rc genhtml_branch_coverage=1 00:14:48.263 --rc genhtml_function_coverage=1 00:14:48.263 --rc genhtml_legend=1 00:14:48.263 --rc geninfo_all_blocks=1 00:14:48.263 --rc geninfo_unexecuted_blocks=1 00:14:48.263 00:14:48.263 ' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:48.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.263 --rc genhtml_branch_coverage=1 00:14:48.263 --rc genhtml_function_coverage=1 00:14:48.263 --rc genhtml_legend=1 00:14:48.263 --rc geninfo_all_blocks=1 00:14:48.263 --rc geninfo_unexecuted_blocks=1 00:14:48.263 00:14:48.263 ' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.263 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.750 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:53.750 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:53.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:53.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.750 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:53.750 Found net devices under 0000:af:00.0: cvl_0_0 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.750 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.750 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:53.750 Found net devices under 0000:af:00.1: cvl_0_1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.751 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:53.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:14:53.751 00:14:53.751 --- 10.0.0.2 ping statistics --- 00:14:53.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.751 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:14:53.751 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:14:54.029 00:14:54.029 --- 10.0.0.1 ping statistics --- 00:14:54.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.029 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=94939 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 94939 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 94939 ']' 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:54.029 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.029 [2024-11-06 12:21:25.467605] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:14:54.029 [2024-11-06 12:21:25.467672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.029 [2024-11-06 12:21:25.569864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.029 [2024-11-06 12:21:25.618198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:54.029 [2024-11-06 12:21:25.618242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.029 [2024-11-06 12:21:25.618253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.029 [2024-11-06 12:21:25.618262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.029 [2024-11-06 12:21:25.618274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.029 [2024-11-06 12:21:25.620200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.029 [2024-11-06 12:21:25.620301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.029 [2024-11-06 12:21:25.620407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.029 [2024-11-06 12:21:25.620419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:54.288 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 [2024-11-06 12:21:25.763195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.288 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:54.288 [2024-11-06 12:21:25.836723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:54.288 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:58.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:11.624 12:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.624 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.624 rmmod nvme_tcp 00:15:11.624 rmmod nvme_fabrics 00:15:11.624 rmmod nvme_keyring 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 94939 ']' 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 94939 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 94939 ']' 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 94939 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94939 00:15:11.624 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94939' 00:15:11.624 killing process with pid 94939 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 94939 00:15:11.624 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 94939 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.883 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.787 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:13.787 00:15:13.787 real 0m25.921s 00:15:13.787 user 1m11.850s 00:15:13.787 sys 0m5.764s 00:15:13.787 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.787 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:13.787 ************************************ 00:15:13.787 END TEST nvmf_connect_disconnect 00:15:13.787 ************************************ 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.047 ************************************ 00:15:14.047 START TEST nvmf_multitarget 00:15:14.047 ************************************ 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:14.047 * Looking for test storage... 
00:15:14.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:14.047 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.047 --rc genhtml_branch_coverage=1 00:15:14.047 --rc genhtml_function_coverage=1 00:15:14.047 --rc genhtml_legend=1 00:15:14.047 --rc geninfo_all_blocks=1 00:15:14.047 --rc geninfo_unexecuted_blocks=1 00:15:14.047 00:15:14.047 ' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.047 --rc genhtml_branch_coverage=1 00:15:14.047 --rc genhtml_function_coverage=1 00:15:14.047 --rc genhtml_legend=1 00:15:14.047 --rc geninfo_all_blocks=1 00:15:14.047 --rc geninfo_unexecuted_blocks=1 00:15:14.047 00:15:14.047 ' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.047 --rc genhtml_branch_coverage=1 00:15:14.047 --rc genhtml_function_coverage=1 00:15:14.047 --rc genhtml_legend=1 00:15:14.047 --rc geninfo_all_blocks=1 00:15:14.047 --rc geninfo_unexecuted_blocks=1 00:15:14.047 00:15:14.047 ' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.047 --rc genhtml_branch_coverage=1 00:15:14.047 --rc genhtml_function_coverage=1 00:15:14.047 --rc genhtml_legend=1 00:15:14.047 --rc geninfo_all_blocks=1 00:15:14.047 --rc geninfo_unexecuted_blocks=1 00:15:14.047 00:15:14.047 ' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.047 12:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.047 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:14.048 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.307 12:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.307 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:19.578 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.578 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:19.578 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:19.578 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:19.578 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.579 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:19.579 Found net devices under 0000:af:00.0: cvl_0_0 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.579 
12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:19.579 Found net devices under 0000:af:00.1: cvl_0_1 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.579 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.579 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:15:19.837 00:15:19.837 --- 10.0.0.2 ping statistics --- 00:15:19.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.837 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:15:19.837 00:15:19.837 --- 10.0.0.1 ping statistics --- 00:15:19.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.837 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=101770 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 101770 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 101770 ']' 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:19.837 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.837 [2024-11-06 12:21:51.438710] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:15:19.837 [2024-11-06 12:21:51.438771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.114 [2024-11-06 12:21:51.540428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.114 [2024-11-06 12:21:51.589404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.114 [2024-11-06 12:21:51.589450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:20.114 [2024-11-06 12:21:51.589468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.114 [2024-11-06 12:21:51.589477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.114 [2024-11-06 12:21:51.589485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.114 [2024-11-06 12:21:51.591547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.114 [2024-11-06 12:21:51.591651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.114 [2024-11-06 12:21:51.591753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.114 [2024-11-06 12:21:51.591757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:20.114 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.114 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:20.372 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:20.372 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:20.372 "nvmf_tgt_1" 00:15:20.372 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:20.631 "nvmf_tgt_2" 00:15:20.631 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.631 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:20.631 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:20.631 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:20.889 true 00:15:20.889 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:20.889 true 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:20.890 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:20.890 rmmod nvme_tcp 00:15:21.148 rmmod nvme_fabrics 00:15:21.149 rmmod nvme_keyring 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 101770 ']' 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 101770 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 101770 ']' 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 101770 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101770 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101770' 00:15:21.149 killing process with pid 101770 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 101770 00:15:21.149 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 101770 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.407 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.312 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:23.312 00:15:23.312 real 0m9.442s 00:15:23.312 user 0m7.320s 00:15:23.312 sys 0m4.815s 00:15:23.312 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:23.312 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:23.312 ************************************ 00:15:23.312 END TEST nvmf_multitarget 00:15:23.312 ************************************ 00:15:23.573 12:21:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:23.573 12:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:23.573 12:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:23.573 12:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.573 ************************************ 00:15:23.573 START TEST nvmf_rpc 00:15:23.573 ************************************ 00:15:23.573 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:23.573 * Looking for test storage... 
00:15:23.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.573 12:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:23.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:23.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.574 --rc genhtml_branch_coverage=1 00:15:23.574 --rc genhtml_function_coverage=1 00:15:23.574 --rc genhtml_legend=1 00:15:23.574 --rc geninfo_all_blocks=1 00:15:23.574 --rc geninfo_unexecuted_blocks=1 
00:15:23.574 00:15:23.574 ' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:23.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.574 --rc genhtml_branch_coverage=1 00:15:23.574 --rc genhtml_function_coverage=1 00:15:23.574 --rc genhtml_legend=1 00:15:23.574 --rc geninfo_all_blocks=1 00:15:23.574 --rc geninfo_unexecuted_blocks=1 00:15:23.574 00:15:23.574 ' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:23.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.574 --rc genhtml_branch_coverage=1 00:15:23.574 --rc genhtml_function_coverage=1 00:15:23.574 --rc genhtml_legend=1 00:15:23.574 --rc geninfo_all_blocks=1 00:15:23.574 --rc geninfo_unexecuted_blocks=1 00:15:23.574 00:15:23.574 ' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:23.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.574 --rc genhtml_branch_coverage=1 00:15:23.574 --rc genhtml_function_coverage=1 00:15:23.574 --rc genhtml_legend=1 00:15:23.574 --rc geninfo_all_blocks=1 00:15:23.574 --rc geninfo_unexecuted_blocks=1 00:15:23.574 00:15:23.574 ' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.574 12:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:23.574 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:23.574 12:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.842 
12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:15:28.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.842 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:28.843 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:28.843 Found net devices under 0000:af:00.0: cvl_0_0 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:28.843 Found net devices under 0000:af:00.1: cvl_0_1 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.843 12:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:28.843 
12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.843 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:29.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:15:29.101 00:15:29.101 --- 10.0.0.2 ping statistics --- 00:15:29.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.101 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:15:29.101 00:15:29.101 --- 10.0.0.1 ping statistics --- 00:15:29.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.101 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.101 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=105675 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 105675 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 105675 ']' 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.359 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.359 [2024-11-06 12:22:00.786367] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:15:29.359 [2024-11-06 12:22:00.786426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.359 [2024-11-06 12:22:00.887068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.359 [2024-11-06 12:22:00.936995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.359 [2024-11-06 12:22:00.937037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.359 [2024-11-06 12:22:00.937048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.359 [2024-11-06 12:22:00.937057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.359 [2024-11-06 12:22:00.937064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.359 [2024-11-06 12:22:00.939122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.359 [2024-11-06 12:22:00.939226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.359 [2024-11-06 12:22:00.939338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.359 [2024-11-06 12:22:00.939339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.617 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:29.617 "tick_rate": 2200000000, 00:15:29.617 "poll_groups": [ 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_000", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_001", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_002", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_003", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [] 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 }' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:29.617 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.617 [2024-11-06 12:22:01.199591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:29.617 "tick_rate": 2200000000, 00:15:29.617 "poll_groups": [ 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_000", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [ 00:15:29.617 { 00:15:29.617 "trtype": "TCP" 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_001", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 
"completed_nvme_io": 0, 00:15:29.617 "transports": [ 00:15:29.617 { 00:15:29.617 "trtype": "TCP" 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_002", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [ 00:15:29.617 { 00:15:29.617 "trtype": "TCP" 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 }, 00:15:29.617 { 00:15:29.617 "name": "nvmf_tgt_poll_group_003", 00:15:29.617 "admin_qpairs": 0, 00:15:29.617 "io_qpairs": 0, 00:15:29.617 "current_admin_qpairs": 0, 00:15:29.617 "current_io_qpairs": 0, 00:15:29.617 "pending_bdev_io": 0, 00:15:29.617 "completed_nvme_io": 0, 00:15:29.617 "transports": [ 00:15:29.617 { 00:15:29.617 "trtype": "TCP" 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 } 00:15:29.617 ] 00:15:29.617 }' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:29.617 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:29.875 
12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.875 Malloc1 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:29.875 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.875 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.876 [2024-11-06 12:22:01.391388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:29.876 [2024-11-06 12:22:01.424040] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:15:29.876 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:29.876 could not add new controller: failed to write to nvme-fabrics device 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.876 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:31.248 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:31.248 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:31.248 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.248 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:31.248 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 
00:15:33.146 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:33.404 12:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.404 [2024-11-06 12:22:04.926523] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:15:33.404 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:33.404 could not add new controller: failed to write to nvme-fabrics device 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:33.404 
12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.404 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.778 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.778 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:34.778 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.778 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:34.778 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:37.308 12:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 [2024-11-06 12:22:08.477262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.245 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.245 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:38.245 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.245 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:38.245 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:40.773 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:40.773 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:40.773 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.773 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:40.774 
12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 [2024-11-06 12:22:11.933900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.774 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.705 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:41.706 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:41.706 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.706 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:41.706 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 [2024-11-06 12:22:15.438116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.232 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.163 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:45.163 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:45.163 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.163 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:45.163 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 [2024-11-06 12:22:18.928307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.690 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:49.061 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:49.061 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:49.061 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:49.061 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:49.061 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 [2024-11-06 12:22:22.518276] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.959 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.330 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.330 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:52.330 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.330 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:52.330 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:54.238 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 [2024-11-06 12:22:26.018247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 [2024-11-06 12:22:26.066369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 
12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.498 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:15:54.498 [2024-11-06 12:22:26.114506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 [2024-11-06 12:22:26.162673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 [2024-11-06 12:22:26.210867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.757 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:54.757 "tick_rate": 2200000000, 00:15:54.757 "poll_groups": [ 00:15:54.757 { 00:15:54.757 "name": "nvmf_tgt_poll_group_000", 00:15:54.757 "admin_qpairs": 2, 00:15:54.757 "io_qpairs": 196, 00:15:54.757 "current_admin_qpairs": 0, 00:15:54.757 "current_io_qpairs": 0, 00:15:54.757 "pending_bdev_io": 0, 00:15:54.757 "completed_nvme_io": 346, 00:15:54.757 "transports": [ 00:15:54.757 { 00:15:54.757 "trtype": "TCP" 00:15:54.757 } 00:15:54.757 ] 00:15:54.757 }, 00:15:54.757 { 00:15:54.757 "name": "nvmf_tgt_poll_group_001", 00:15:54.757 "admin_qpairs": 2, 00:15:54.757 "io_qpairs": 196, 00:15:54.757 "current_admin_qpairs": 0, 00:15:54.757 "current_io_qpairs": 0, 00:15:54.757 "pending_bdev_io": 0, 00:15:54.757 "completed_nvme_io": 248, 00:15:54.757 "transports": [ 00:15:54.757 { 00:15:54.757 "trtype": "TCP" 00:15:54.757 } 00:15:54.757 ] 00:15:54.757 }, 00:15:54.757 { 00:15:54.757 "name": "nvmf_tgt_poll_group_002", 00:15:54.757 "admin_qpairs": 1, 00:15:54.757 "io_qpairs": 196, 00:15:54.757 "current_admin_qpairs": 0, 00:15:54.758 "current_io_qpairs": 0, 00:15:54.758 "pending_bdev_io": 0, 
00:15:54.758 "completed_nvme_io": 262, 00:15:54.758 "transports": [ 00:15:54.758 { 00:15:54.758 "trtype": "TCP" 00:15:54.758 } 00:15:54.758 ] 00:15:54.758 }, 00:15:54.758 { 00:15:54.758 "name": "nvmf_tgt_poll_group_003", 00:15:54.758 "admin_qpairs": 2, 00:15:54.758 "io_qpairs": 196, 00:15:54.758 "current_admin_qpairs": 0, 00:15:54.758 "current_io_qpairs": 0, 00:15:54.758 "pending_bdev_io": 0, 00:15:54.758 "completed_nvme_io": 278, 00:15:54.758 "transports": [ 00:15:54.758 { 00:15:54.758 "trtype": "TCP" 00:15:54.758 } 00:15:54.758 ] 00:15:54.758 } 00:15:54.758 ] 00:15:54.758 }' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.758 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.758 rmmod nvme_tcp 00:15:55.016 rmmod nvme_fabrics 00:15:55.016 rmmod nvme_keyring 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 105675 ']' 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 105675 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 105675 ']' 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 105675 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 105675 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 105675' 00:15:55.016 killing process with pid 105675 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 105675 00:15:55.016 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 105675 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.274 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.176 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:57.176 00:15:57.176 real 0m33.797s 00:15:57.176 user 1m44.152s 00:15:57.176 sys 0m6.281s 00:15:57.176 12:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:57.176 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 ************************************ 00:15:57.176 END TEST nvmf_rpc 00:15:57.176 ************************************ 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.436 ************************************ 00:15:57.436 START TEST nvmf_invalid 00:15:57.436 ************************************ 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:57.436 * Looking for test storage... 
00:15:57.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:15:57.436 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.436 --rc genhtml_branch_coverage=1 00:15:57.436 --rc 
genhtml_function_coverage=1 00:15:57.436 --rc genhtml_legend=1 00:15:57.436 --rc geninfo_all_blocks=1 00:15:57.436 --rc geninfo_unexecuted_blocks=1 00:15:57.436 00:15:57.436 ' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.436 --rc genhtml_branch_coverage=1 00:15:57.436 --rc genhtml_function_coverage=1 00:15:57.436 --rc genhtml_legend=1 00:15:57.436 --rc geninfo_all_blocks=1 00:15:57.436 --rc geninfo_unexecuted_blocks=1 00:15:57.436 00:15:57.436 ' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.436 --rc genhtml_branch_coverage=1 00:15:57.436 --rc genhtml_function_coverage=1 00:15:57.436 --rc genhtml_legend=1 00:15:57.436 --rc geninfo_all_blocks=1 00:15:57.436 --rc geninfo_unexecuted_blocks=1 00:15:57.436 00:15:57.436 ' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:57.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.436 --rc genhtml_branch_coverage=1 00:15:57.436 --rc genhtml_function_coverage=1 00:15:57.436 --rc genhtml_legend=1 00:15:57.436 --rc geninfo_all_blocks=1 00:15:57.436 --rc geninfo_unexecuted_blocks=1 00:15:57.436 00:15:57.436 ' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.436 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.437 12:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.437 12:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.437 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.695 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:57.695 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:57.695 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:57.695 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.038 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.038 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.039 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:03.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:03.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.039 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.039 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.039 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.039 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.324 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:16:03.324 00:16:03.324 --- 10.0.0.2 ping statistics --- 00:16:03.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.324 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:03.324 00:16:03.324 --- 10.0.0.1 ping statistics --- 00:16:03.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.324 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:03.324 12:22:34 
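Editor's note: the `ip` / `ip netns` / `iptables` / `ping` sequence traced above builds the back-to-back TCP test topology used by `nvmftestinit`: the target-side interface (cvl_0_0) is moved into a dedicated network namespace and given 10.0.0.2, while the initiator-side interface (cvl_0_1) stays in the root namespace with 10.0.0.1, and connectivity is verified in both directions. A minimal dry-run sketch of that sequence, assuming the same interface and namespace names as in this trace; the commands are echoed rather than executed, since the real ones require root and physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace bring-up seen in the trace above.
# run() prints each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # target namespace (nvmf/common.sh@265)

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # move target NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator side, then verify:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs inside the namespace, every later RPC to it is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` array the trace sets up at nvmf/common.sh@266.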
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:03.324 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=114030 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 114030 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 114030 ']' 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:03.325 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:03.325 [2024-11-06 12:22:34.880561] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:16:03.325 [2024-11-06 12:22:34.880617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.612 [2024-11-06 12:22:34.982569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.612 [2024-11-06 12:22:35.032594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.612 [2024-11-06 12:22:35.032634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.612 [2024-11-06 12:22:35.032645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.612 [2024-11-06 12:22:35.032654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.612 [2024-11-06 12:22:35.032661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.612 [2024-11-06 12:22:35.034614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.612 [2024-11-06 12:22:35.034720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.612 [2024-11-06 12:22:35.034824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.612 [2024-11-06 12:22:35.034829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:03.612 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10971 00:16:03.870 [2024-11-06 12:22:35.435582] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:03.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:03.870 { 00:16:03.870 "nqn": "nqn.2016-06.io.spdk:cnode10971", 00:16:03.870 "tgt_name": "foobar", 00:16:03.870 "method": "nvmf_create_subsystem", 00:16:03.870 "req_id": 1 00:16:03.870 } 00:16:03.870 Got JSON-RPC error 
response 00:16:03.870 response: 00:16:03.870 { 00:16:03.870 "code": -32603, 00:16:03.870 "message": "Unable to find target foobar" 00:16:03.870 }' 00:16:03.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:03.870 { 00:16:03.870 "nqn": "nqn.2016-06.io.spdk:cnode10971", 00:16:03.870 "tgt_name": "foobar", 00:16:03.870 "method": "nvmf_create_subsystem", 00:16:03.870 "req_id": 1 00:16:03.870 } 00:16:03.870 Got JSON-RPC error response 00:16:03.870 response: 00:16:03.870 { 00:16:03.870 "code": -32603, 00:16:03.870 "message": "Unable to find target foobar" 00:16:03.870 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:03.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:03.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18302 00:16:04.128 [2024-11-06 12:22:35.712565] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18302: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:04.128 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:04.128 { 00:16:04.128 "nqn": "nqn.2016-06.io.spdk:cnode18302", 00:16:04.128 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:04.128 "method": "nvmf_create_subsystem", 00:16:04.128 "req_id": 1 00:16:04.128 } 00:16:04.128 Got JSON-RPC error response 00:16:04.128 response: 00:16:04.128 { 00:16:04.128 "code": -32602, 00:16:04.128 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:04.128 }' 00:16:04.128 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:04.128 { 00:16:04.128 "nqn": "nqn.2016-06.io.spdk:cnode18302", 00:16:04.128 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:04.128 "method": "nvmf_create_subsystem", 
00:16:04.128 "req_id": 1 00:16:04.128 } 00:16:04.128 Got JSON-RPC error response 00:16:04.128 response: 00:16:04.128 { 00:16:04.128 "code": -32602, 00:16:04.128 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:04.128 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:04.128 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:04.128 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14945 00:16:04.387 [2024-11-06 12:22:35.989547] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14945: invalid model number 'SPDK_Controller' 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:04.646 { 00:16:04.646 "nqn": "nqn.2016-06.io.spdk:cnode14945", 00:16:04.646 "model_number": "SPDK_Controller\u001f", 00:16:04.646 "method": "nvmf_create_subsystem", 00:16:04.646 "req_id": 1 00:16:04.646 } 00:16:04.646 Got JSON-RPC error response 00:16:04.646 response: 00:16:04.646 { 00:16:04.646 "code": -32602, 00:16:04.646 "message": "Invalid MN SPDK_Controller\u001f" 00:16:04.646 }' 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:04.646 { 00:16:04.646 "nqn": "nqn.2016-06.io.spdk:cnode14945", 00:16:04.646 "model_number": "SPDK_Controller\u001f", 00:16:04.646 "method": "nvmf_create_subsystem", 00:16:04.646 "req_id": 1 00:16:04.646 } 00:16:04.646 Got JSON-RPC error response 00:16:04.646 response: 00:16:04.646 { 00:16:04.646 "code": -32602, 00:16:04.646 "message": "Invalid MN SPDK_Controller\u001f" 00:16:04.646 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.646 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.646 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:04.647 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:04.647 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:04.647 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:04.647 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:16:04.647 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fMrl6eW$|U MF<H)~a,/H' 00:16:08.018 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:10.555 00:16:10.555 real 0m12.829s 00:16:10.555 user 0m23.153s 00:16:10.555 sys 0m5.448s 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:10.555 ************************************ 00:16:10.555 END TEST nvmf_invalid 00:16:10.555 ************************************ 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:10.555 ************************************ 00:16:10.555 START TEST nvmf_connect_stress 00:16:10.555 ************************************ 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:10.555 * Looking for test storage... 
00:16:10.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:10.555 12:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.555 12:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:10.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.555 --rc genhtml_branch_coverage=1 00:16:10.555 --rc genhtml_function_coverage=1 00:16:10.555 --rc genhtml_legend=1 00:16:10.555 --rc geninfo_all_blocks=1 00:16:10.555 --rc geninfo_unexecuted_blocks=1 00:16:10.555 00:16:10.555 ' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:10.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.555 --rc genhtml_branch_coverage=1 00:16:10.555 --rc genhtml_function_coverage=1 00:16:10.555 --rc genhtml_legend=1 00:16:10.555 --rc geninfo_all_blocks=1 00:16:10.555 --rc geninfo_unexecuted_blocks=1 00:16:10.555 00:16:10.555 ' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:10.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.555 --rc genhtml_branch_coverage=1 00:16:10.555 --rc genhtml_function_coverage=1 00:16:10.555 --rc genhtml_legend=1 00:16:10.555 --rc geninfo_all_blocks=1 00:16:10.555 --rc geninfo_unexecuted_blocks=1 00:16:10.555 00:16:10.555 ' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:10.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.555 --rc genhtml_branch_coverage=1 00:16:10.555 --rc genhtml_function_coverage=1 00:16:10.555 --rc genhtml_legend=1 00:16:10.555 --rc geninfo_all_blocks=1 00:16:10.555 --rc geninfo_unexecuted_blocks=1 00:16:10.555 00:16:10.555 ' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.555 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:10.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:10.556 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:15.830 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.831 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:15.831 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.831 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:15.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.831 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:15.831 Found net devices under 0000:af:00.0: cvl_0_0 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:15.831 Found net devices under 0000:af:00.1: cvl_0_1 
00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:15.831 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:16.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:16.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:16:16.091 00:16:16.091 --- 10.0.0.2 ping statistics --- 00:16:16.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.091 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:16.091 00:16:16.091 --- 10.0.0.1 ping statistics --- 00:16:16.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.091 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:16.091 12:22:47 
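The `nvmf_tcp_init` steps traced above build a two-endpoint test network on one host: the target NIC (`cvl_0_0`) is moved into a network namespace, each side gets a /24 address, port 4420 is opened, and bidirectional pings confirm reachability. The sequence can be replayed as a dry run, with `run()` only echoing each command since the real steps need root and the `cvl_0_*` interfaces:

```shell
# Dry-run sketch of the nvmf_tcp_init sequence from nvmf/common.sh;
# run() prints commands instead of executing them.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target NIC into namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

Moving only the target interface into the namespace lets a single machine exercise the full TCP path; the target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, exactly as the `NVMF_TARGET_NS_CMD` prefix in the log shows.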
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=118720 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 118720 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 118720 ']' 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.091 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 [2024-11-06 12:22:47.565309] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:16:16.091 [2024-11-06 12:22:47.565348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.091 [2024-11-06 12:22:47.622196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:16.091 [2024-11-06 12:22:47.663382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.091 [2024-11-06 12:22:47.663416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.091 [2024-11-06 12:22:47.663423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.091 [2024-11-06 12:22:47.663429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.091 [2024-11-06 12:22:47.663433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:16.091 [2024-11-06 12:22:47.664848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.091 [2024-11-06 12:22:47.664943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.091 [2024-11-06 12:22:47.664944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.351 [2024-11-06 12:22:47.863646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
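`nvmfappstart -m 0xE` above yields exactly three reactors, on cores 1, 2, and 3: each set bit in the core mask selects one core, and 0xE is binary 1110. A small decode sketch (the loop bound of 8 bits is an arbitrary assumption, not an SPDK limit):

```shell
# Sketch: decode an SPDK-style core mask into the core list it selects.
# Bit n set => core n hosts a reactor; 0xE = 0b1110 => cores 1, 2, 3.
mask=$((0xE))
cores=()
for ((n = 0; n < 8; n++)); do
  (( (mask >> n) & 1 )) && cores+=("$n")
done
echo "cores: ${cores[*]}"
```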
-- # xtrace_disable 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.351 [2024-11-06 12:22:47.883870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.351 NULL1 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=118743 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.351 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.352 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.611 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.870 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:16.870 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.870 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
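The repeated `for i in $(seq 1 20)` / `cat` pairs above are connect_stress.sh queuing twenty RPC snippets into `rpc.txt` for batched replay. The trace shows only the bare `cat`; the appended line in this sketch is a hypothetical placeholder, not the script's real RPC:

```shell
# Sketch of the rpc.txt batching pattern: 20 iterations each append one
# RPC command to a batch file. The command text here is a placeholder.
rpcs="$(mktemp)"
for i in $(seq 1 20); do
  cat >> "$rpcs" <<EOF
bdev_get_bdevs -b NULL1
EOF
done
count=$(wc -l < "$rpcs")
echo "queued $count rpc calls"
rm -f "$rpcs"
```

Batching keeps the RPC socket busy while the stress tool hammers the connect path, which is the point of the test.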
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.870 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.129 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.129 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:17.129 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.129 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.129 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.388 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:17.388 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.388 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.388 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.955 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.955 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:17.955 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.955 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.955 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.213 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.213 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:18.213 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.213 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.213 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.471 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.471 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:18.471 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.471 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.471 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.730 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.730 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:18.730 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.730 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.730 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.989 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.989 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:18.989 12:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.989 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.989 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.557 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.557 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:19.557 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.557 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.557 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.818 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.819 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:19.819 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.819 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.819 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.080 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.080 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:20.080 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.080 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.080 12:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.339 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.339 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:20.339 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.339 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.339 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.598 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.598 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:20.598 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.598 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.598 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.166 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.166 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:21.166 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.166 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.166 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.424 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.424 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:21.424 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.424 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.424 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.683 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.683 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:21.684 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.684 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.684 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.942 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.942 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:21.942 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.942 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.942 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.201 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.201 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:22.201 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.201 12:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.201 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.769 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:22.769 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.769 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.769 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.027 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.027 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:23.027 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.027 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.027 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.286 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.286 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:23.286 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.286 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.286 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.545 12:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.545 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:23.545 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.545 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.545 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.804 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.804 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:23.804 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.804 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.804 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.372 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.372 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:24.372 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.372 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.372 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.631 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.631 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:24.631 
12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.631 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.631 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.890 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.890 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:24.890 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.890 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.890 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.148 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.148 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:25.148 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.148 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.148 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.407 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.407 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:25.407 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.407 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.407 
12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.976 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.976 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:25.976 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.976 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.976 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.235 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.235 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:26.235 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.235 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.235 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.492 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.492 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:26.492 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.492 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.492 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.492 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.751 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 118743 00:16:26.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (118743) - No such process 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 118743 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.751 rmmod nvme_tcp 00:16:26.751 rmmod nvme_fabrics 00:16:26.751 rmmod nvme_keyring 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 118720 ']' 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 118720 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 118720 ']' 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 118720 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:26.751 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 118720 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 118720' 00:16:27.010 killing process with pid 118720 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 118720 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 118720 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.010 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:29.546 00:16:29.546 real 0m18.907s 00:16:29.546 user 0m40.712s 00:16:29.546 sys 0m7.855s 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.546 ************************************ 00:16:29.546 END TEST nvmf_connect_stress 00:16:29.546 ************************************ 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.546 ************************************ 00:16:29.546 START TEST nvmf_fused_ordering 00:16:29.546 ************************************ 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:29.546 * Looking for test storage... 00:16:29.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.546 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:29.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.547 --rc genhtml_branch_coverage=1 00:16:29.547 --rc genhtml_function_coverage=1 00:16:29.547 --rc genhtml_legend=1 00:16:29.547 --rc geninfo_all_blocks=1 00:16:29.547 --rc geninfo_unexecuted_blocks=1 00:16:29.547 00:16:29.547 ' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:29.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.547 --rc genhtml_branch_coverage=1 00:16:29.547 --rc genhtml_function_coverage=1 00:16:29.547 --rc genhtml_legend=1 00:16:29.547 --rc geninfo_all_blocks=1 00:16:29.547 --rc geninfo_unexecuted_blocks=1 00:16:29.547 00:16:29.547 ' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:29.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.547 --rc genhtml_branch_coverage=1 00:16:29.547 --rc genhtml_function_coverage=1 00:16:29.547 --rc genhtml_legend=1 00:16:29.547 --rc geninfo_all_blocks=1 00:16:29.547 --rc geninfo_unexecuted_blocks=1 00:16:29.547 00:16:29.547 ' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:29.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.547 --rc genhtml_branch_coverage=1 
00:16:29.547 --rc genhtml_function_coverage=1 00:16:29.547 --rc genhtml_legend=1 00:16:29.547 --rc geninfo_all_blocks=1 00:16:29.547 --rc geninfo_unexecuted_blocks=1 00:16:29.547 00:16:29.547 ' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:29.547 12:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:29.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:29.547 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.819 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.819 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.819 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.819 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.820 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.820 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.820 Found net devices under 0000:af:00.1: cvl_0_1 
00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.820 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:35.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:16:35.079 00:16:35.079 --- 10.0.0.2 ping statistics --- 00:16:35.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.079 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:16:35.079 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:16:35.079 00:16:35.079 --- 10.0.0.1 ping statistics --- 00:16:35.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.079 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:16:35.079 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.079 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:35.079 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:35.080 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=124315 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 124315 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 124315 ']' 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.080 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.080 [2024-11-06 12:23:06.558069] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:16:35.080 [2024-11-06 12:23:06.558135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.080 [2024-11-06 12:23:06.630708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.080 [2024-11-06 12:23:06.667334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.080 [2024-11-06 12:23:06.667368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.080 [2024-11-06 12:23:06.667375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.080 [2024-11-06 12:23:06.667380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.080 [2024-11-06 12:23:06.667388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:35.080 [2024-11-06 12:23:06.667948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 [2024-11-06 12:23:06.820163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 [2024-11-06 12:23:06.836370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 NULL1 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.339 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:35.339 [2024-11-06 12:23:06.875320] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:16:35.339 [2024-11-06 12:23:06.875342] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124344 ] 00:16:35.907 Attached to nqn.2016-06.io.spdk:cnode1 00:16:35.907 Namespace ID: 1 size: 1GB 00:16:35.907 fused_ordering(0) 00:16:35.907 fused_ordering(1) 00:16:35.907 fused_ordering(2) 00:16:35.907 fused_ordering(3) 00:16:35.907 fused_ordering(4) 00:16:35.907 fused_ordering(5) 00:16:35.907 fused_ordering(6) 00:16:35.907 fused_ordering(7) 00:16:35.907 fused_ordering(8) 00:16:35.907 fused_ordering(9) 00:16:35.907 fused_ordering(10) 00:16:35.907 fused_ordering(11) 00:16:35.907 fused_ordering(12) 00:16:35.907 fused_ordering(13) 00:16:35.907 fused_ordering(14) 00:16:35.907 fused_ordering(15) 00:16:35.907 fused_ordering(16) 00:16:35.907 fused_ordering(17) 00:16:35.907 fused_ordering(18) 00:16:35.907 fused_ordering(19) 00:16:35.907 fused_ordering(20) 00:16:35.907 fused_ordering(21) 00:16:35.907 fused_ordering(22) 00:16:35.907 fused_ordering(23) 00:16:35.907 fused_ordering(24) 00:16:35.907 fused_ordering(25) 00:16:35.907 fused_ordering(26) 00:16:35.907 fused_ordering(27) 00:16:35.907 
fused_ordering(28) 00:16:35.907 ... fused_ordering(204) 00:16:35.907 fused_ordering(205) 00:16:36.167 fused_ordering(206) 00:16:36.167 ... fused_ordering(409) 00:16:36.167 fused_ordering(410) 00:16:36.735 fused_ordering(411) 00:16:36.735 ... fused_ordering(614) 00:16:36.736 fused_ordering(615) 00:16:37.304 fused_ordering(616) 00:16:37.304 ... fused_ordering(634)
00:16:37.304 fused_ordering(635) 00:16:37.304 fused_ordering(636) 00:16:37.304 fused_ordering(637) 00:16:37.304 fused_ordering(638) 00:16:37.304 fused_ordering(639) 00:16:37.304 fused_ordering(640) 00:16:37.304 fused_ordering(641) 00:16:37.304 fused_ordering(642) 00:16:37.304 fused_ordering(643) 00:16:37.304 fused_ordering(644) 00:16:37.304 fused_ordering(645) 00:16:37.304 fused_ordering(646) 00:16:37.304 fused_ordering(647) 00:16:37.304 fused_ordering(648) 00:16:37.304 fused_ordering(649) 00:16:37.304 fused_ordering(650) 00:16:37.304 fused_ordering(651) 00:16:37.304 fused_ordering(652) 00:16:37.304 fused_ordering(653) 00:16:37.304 fused_ordering(654) 00:16:37.304 fused_ordering(655) 00:16:37.304 fused_ordering(656) 00:16:37.304 fused_ordering(657) 00:16:37.304 fused_ordering(658) 00:16:37.304 fused_ordering(659) 00:16:37.304 fused_ordering(660) 00:16:37.304 fused_ordering(661) 00:16:37.304 fused_ordering(662) 00:16:37.304 fused_ordering(663) 00:16:37.304 fused_ordering(664) 00:16:37.304 fused_ordering(665) 00:16:37.304 fused_ordering(666) 00:16:37.304 fused_ordering(667) 00:16:37.304 fused_ordering(668) 00:16:37.304 fused_ordering(669) 00:16:37.304 fused_ordering(670) 00:16:37.304 fused_ordering(671) 00:16:37.304 fused_ordering(672) 00:16:37.304 fused_ordering(673) 00:16:37.304 fused_ordering(674) 00:16:37.304 fused_ordering(675) 00:16:37.304 fused_ordering(676) 00:16:37.304 fused_ordering(677) 00:16:37.304 fused_ordering(678) 00:16:37.304 fused_ordering(679) 00:16:37.304 fused_ordering(680) 00:16:37.304 fused_ordering(681) 00:16:37.304 fused_ordering(682) 00:16:37.304 fused_ordering(683) 00:16:37.304 fused_ordering(684) 00:16:37.304 fused_ordering(685) 00:16:37.304 fused_ordering(686) 00:16:37.304 fused_ordering(687) 00:16:37.304 fused_ordering(688) 00:16:37.304 fused_ordering(689) 00:16:37.304 fused_ordering(690) 00:16:37.304 fused_ordering(691) 00:16:37.304 fused_ordering(692) 00:16:37.304 fused_ordering(693) 00:16:37.304 fused_ordering(694) 00:16:37.304 
fused_ordering(695) 00:16:37.304 fused_ordering(696) 00:16:37.304 fused_ordering(697) 00:16:37.304 fused_ordering(698) 00:16:37.304 fused_ordering(699) 00:16:37.304 fused_ordering(700) 00:16:37.305 fused_ordering(701) 00:16:37.305 fused_ordering(702) 00:16:37.305 fused_ordering(703) 00:16:37.305 fused_ordering(704) 00:16:37.305 fused_ordering(705) 00:16:37.305 fused_ordering(706) 00:16:37.305 fused_ordering(707) 00:16:37.305 fused_ordering(708) 00:16:37.305 fused_ordering(709) 00:16:37.305 fused_ordering(710) 00:16:37.305 fused_ordering(711) 00:16:37.305 fused_ordering(712) 00:16:37.305 fused_ordering(713) 00:16:37.305 fused_ordering(714) 00:16:37.305 fused_ordering(715) 00:16:37.305 fused_ordering(716) 00:16:37.305 fused_ordering(717) 00:16:37.305 fused_ordering(718) 00:16:37.305 fused_ordering(719) 00:16:37.305 fused_ordering(720) 00:16:37.305 fused_ordering(721) 00:16:37.305 fused_ordering(722) 00:16:37.305 fused_ordering(723) 00:16:37.305 fused_ordering(724) 00:16:37.305 fused_ordering(725) 00:16:37.305 fused_ordering(726) 00:16:37.305 fused_ordering(727) 00:16:37.305 fused_ordering(728) 00:16:37.305 fused_ordering(729) 00:16:37.305 fused_ordering(730) 00:16:37.305 fused_ordering(731) 00:16:37.305 fused_ordering(732) 00:16:37.305 fused_ordering(733) 00:16:37.305 fused_ordering(734) 00:16:37.305 fused_ordering(735) 00:16:37.305 fused_ordering(736) 00:16:37.305 fused_ordering(737) 00:16:37.305 fused_ordering(738) 00:16:37.305 fused_ordering(739) 00:16:37.305 fused_ordering(740) 00:16:37.305 fused_ordering(741) 00:16:37.305 fused_ordering(742) 00:16:37.305 fused_ordering(743) 00:16:37.305 fused_ordering(744) 00:16:37.305 fused_ordering(745) 00:16:37.305 fused_ordering(746) 00:16:37.305 fused_ordering(747) 00:16:37.305 fused_ordering(748) 00:16:37.305 fused_ordering(749) 00:16:37.305 fused_ordering(750) 00:16:37.305 fused_ordering(751) 00:16:37.305 fused_ordering(752) 00:16:37.305 fused_ordering(753) 00:16:37.305 fused_ordering(754) 00:16:37.305 fused_ordering(755) 
00:16:37.305 fused_ordering(756) 00:16:37.305 fused_ordering(757) 00:16:37.305 fused_ordering(758) 00:16:37.305 fused_ordering(759) 00:16:37.305 fused_ordering(760) 00:16:37.305 fused_ordering(761) 00:16:37.305 fused_ordering(762) 00:16:37.305 fused_ordering(763) 00:16:37.305 fused_ordering(764) 00:16:37.305 fused_ordering(765) 00:16:37.305 fused_ordering(766) 00:16:37.305 fused_ordering(767) 00:16:37.305 fused_ordering(768) 00:16:37.305 fused_ordering(769) 00:16:37.305 fused_ordering(770) 00:16:37.305 fused_ordering(771) 00:16:37.305 fused_ordering(772) 00:16:37.305 fused_ordering(773) 00:16:37.305 fused_ordering(774) 00:16:37.305 fused_ordering(775) 00:16:37.305 fused_ordering(776) 00:16:37.305 fused_ordering(777) 00:16:37.305 fused_ordering(778) 00:16:37.305 fused_ordering(779) 00:16:37.305 fused_ordering(780) 00:16:37.305 fused_ordering(781) 00:16:37.305 fused_ordering(782) 00:16:37.305 fused_ordering(783) 00:16:37.305 fused_ordering(784) 00:16:37.305 fused_ordering(785) 00:16:37.305 fused_ordering(786) 00:16:37.305 fused_ordering(787) 00:16:37.305 fused_ordering(788) 00:16:37.305 fused_ordering(789) 00:16:37.305 fused_ordering(790) 00:16:37.305 fused_ordering(791) 00:16:37.305 fused_ordering(792) 00:16:37.305 fused_ordering(793) 00:16:37.305 fused_ordering(794) 00:16:37.305 fused_ordering(795) 00:16:37.305 fused_ordering(796) 00:16:37.305 fused_ordering(797) 00:16:37.305 fused_ordering(798) 00:16:37.305 fused_ordering(799) 00:16:37.305 fused_ordering(800) 00:16:37.305 fused_ordering(801) 00:16:37.305 fused_ordering(802) 00:16:37.305 fused_ordering(803) 00:16:37.305 fused_ordering(804) 00:16:37.305 fused_ordering(805) 00:16:37.305 fused_ordering(806) 00:16:37.305 fused_ordering(807) 00:16:37.305 fused_ordering(808) 00:16:37.305 fused_ordering(809) 00:16:37.305 fused_ordering(810) 00:16:37.305 fused_ordering(811) 00:16:37.305 fused_ordering(812) 00:16:37.305 fused_ordering(813) 00:16:37.305 fused_ordering(814) 00:16:37.305 fused_ordering(815) 00:16:37.305 
fused_ordering(816) 00:16:37.305 fused_ordering(817) 00:16:37.305 fused_ordering(818) 00:16:37.305 fused_ordering(819) 00:16:37.305 fused_ordering(820) 00:16:37.873 fused_ordering(821) 00:16:37.873 fused_ordering(822) 00:16:37.873 fused_ordering(823) 00:16:37.873 fused_ordering(824) 00:16:37.873 fused_ordering(825) 00:16:37.873 fused_ordering(826) 00:16:37.873 fused_ordering(827) 00:16:37.873 fused_ordering(828) 00:16:37.873 fused_ordering(829) 00:16:37.873 fused_ordering(830) 00:16:37.873 fused_ordering(831) 00:16:37.873 fused_ordering(832) 00:16:37.873 fused_ordering(833) 00:16:37.873 fused_ordering(834) 00:16:37.873 fused_ordering(835) 00:16:37.873 fused_ordering(836) 00:16:37.873 fused_ordering(837) 00:16:37.873 fused_ordering(838) 00:16:37.873 fused_ordering(839) 00:16:37.873 fused_ordering(840) 00:16:37.873 fused_ordering(841) 00:16:37.873 fused_ordering(842) 00:16:37.873 fused_ordering(843) 00:16:37.873 fused_ordering(844) 00:16:37.873 fused_ordering(845) 00:16:37.873 fused_ordering(846) 00:16:37.873 fused_ordering(847) 00:16:37.873 fused_ordering(848) 00:16:37.873 fused_ordering(849) 00:16:37.873 fused_ordering(850) 00:16:37.873 fused_ordering(851) 00:16:37.873 fused_ordering(852) 00:16:37.873 fused_ordering(853) 00:16:37.873 fused_ordering(854) 00:16:37.873 fused_ordering(855) 00:16:37.873 fused_ordering(856) 00:16:37.873 fused_ordering(857) 00:16:37.873 fused_ordering(858) 00:16:37.873 fused_ordering(859) 00:16:37.873 fused_ordering(860) 00:16:37.873 fused_ordering(861) 00:16:37.873 fused_ordering(862) 00:16:37.873 fused_ordering(863) 00:16:37.873 fused_ordering(864) 00:16:37.873 fused_ordering(865) 00:16:37.873 fused_ordering(866) 00:16:37.873 fused_ordering(867) 00:16:37.873 fused_ordering(868) 00:16:37.873 fused_ordering(869) 00:16:37.873 fused_ordering(870) 00:16:37.873 fused_ordering(871) 00:16:37.873 fused_ordering(872) 00:16:37.873 fused_ordering(873) 00:16:37.873 fused_ordering(874) 00:16:37.873 fused_ordering(875) 00:16:37.873 fused_ordering(876) 
00:16:37.873 fused_ordering(877) 00:16:37.873 fused_ordering(878) 00:16:37.873 fused_ordering(879) 00:16:37.873 fused_ordering(880) 00:16:37.873 fused_ordering(881) 00:16:37.873 fused_ordering(882) 00:16:37.873 fused_ordering(883) 00:16:37.873 fused_ordering(884) 00:16:37.873 fused_ordering(885) 00:16:37.873 fused_ordering(886) 00:16:37.873 fused_ordering(887) 00:16:37.873 fused_ordering(888) 00:16:37.873 fused_ordering(889) 00:16:37.873 fused_ordering(890) 00:16:37.873 fused_ordering(891) 00:16:37.873 fused_ordering(892) 00:16:37.873 fused_ordering(893) 00:16:37.873 fused_ordering(894) 00:16:37.873 fused_ordering(895) 00:16:37.873 fused_ordering(896) 00:16:37.873 fused_ordering(897) 00:16:37.873 fused_ordering(898) 00:16:37.873 fused_ordering(899) 00:16:37.873 fused_ordering(900) 00:16:37.873 fused_ordering(901) 00:16:37.873 fused_ordering(902) 00:16:37.873 fused_ordering(903) 00:16:37.873 fused_ordering(904) 00:16:37.873 fused_ordering(905) 00:16:37.873 fused_ordering(906) 00:16:37.873 fused_ordering(907) 00:16:37.873 fused_ordering(908) 00:16:37.873 fused_ordering(909) 00:16:37.873 fused_ordering(910) 00:16:37.873 fused_ordering(911) 00:16:37.873 fused_ordering(912) 00:16:37.873 fused_ordering(913) 00:16:37.873 fused_ordering(914) 00:16:37.873 fused_ordering(915) 00:16:37.873 fused_ordering(916) 00:16:37.873 fused_ordering(917) 00:16:37.873 fused_ordering(918) 00:16:37.873 fused_ordering(919) 00:16:37.873 fused_ordering(920) 00:16:37.873 fused_ordering(921) 00:16:37.873 fused_ordering(922) 00:16:37.873 fused_ordering(923) 00:16:37.873 fused_ordering(924) 00:16:37.873 fused_ordering(925) 00:16:37.873 fused_ordering(926) 00:16:37.873 fused_ordering(927) 00:16:37.873 fused_ordering(928) 00:16:37.873 fused_ordering(929) 00:16:37.873 fused_ordering(930) 00:16:37.873 fused_ordering(931) 00:16:37.873 fused_ordering(932) 00:16:37.873 fused_ordering(933) 00:16:37.873 fused_ordering(934) 00:16:37.873 fused_ordering(935) 00:16:37.873 fused_ordering(936) 00:16:37.873 
fused_ordering(937) 00:16:37.873 fused_ordering(938) 00:16:37.873 fused_ordering(939) 00:16:37.873 fused_ordering(940) 00:16:37.873 fused_ordering(941) 00:16:37.873 fused_ordering(942) 00:16:37.873 fused_ordering(943) 00:16:37.873 fused_ordering(944) 00:16:37.873 fused_ordering(945) 00:16:37.873 fused_ordering(946) 00:16:37.873 fused_ordering(947) 00:16:37.873 fused_ordering(948) 00:16:37.873 fused_ordering(949) 00:16:37.873 fused_ordering(950) 00:16:37.873 fused_ordering(951) 00:16:37.873 fused_ordering(952) 00:16:37.873 fused_ordering(953) 00:16:37.873 fused_ordering(954) 00:16:37.873 fused_ordering(955) 00:16:37.873 fused_ordering(956) 00:16:37.873 fused_ordering(957) 00:16:37.873 fused_ordering(958) 00:16:37.873 fused_ordering(959) 00:16:37.873 fused_ordering(960) 00:16:37.873 fused_ordering(961) 00:16:37.873 fused_ordering(962) 00:16:37.873 fused_ordering(963) 00:16:37.873 fused_ordering(964) 00:16:37.873 fused_ordering(965) 00:16:37.873 fused_ordering(966) 00:16:37.873 fused_ordering(967) 00:16:37.873 fused_ordering(968) 00:16:37.873 fused_ordering(969) 00:16:37.873 fused_ordering(970) 00:16:37.873 fused_ordering(971) 00:16:37.873 fused_ordering(972) 00:16:37.873 fused_ordering(973) 00:16:37.873 fused_ordering(974) 00:16:37.873 fused_ordering(975) 00:16:37.873 fused_ordering(976) 00:16:37.873 fused_ordering(977) 00:16:37.873 fused_ordering(978) 00:16:37.873 fused_ordering(979) 00:16:37.873 fused_ordering(980) 00:16:37.873 fused_ordering(981) 00:16:37.873 fused_ordering(982) 00:16:37.873 fused_ordering(983) 00:16:37.873 fused_ordering(984) 00:16:37.873 fused_ordering(985) 00:16:37.873 fused_ordering(986) 00:16:37.873 fused_ordering(987) 00:16:37.873 fused_ordering(988) 00:16:37.873 fused_ordering(989) 00:16:37.873 fused_ordering(990) 00:16:37.873 fused_ordering(991) 00:16:37.873 fused_ordering(992) 00:16:37.873 fused_ordering(993) 00:16:37.873 fused_ordering(994) 00:16:37.873 fused_ordering(995) 00:16:37.873 fused_ordering(996) 00:16:37.873 fused_ordering(997) 
00:16:37.873 fused_ordering(998) 00:16:37.873 fused_ordering(999) 00:16:37.873 fused_ordering(1000) 00:16:37.873 fused_ordering(1001) 00:16:37.873 fused_ordering(1002) 00:16:37.873 fused_ordering(1003) 00:16:37.873 fused_ordering(1004) 00:16:37.873 fused_ordering(1005) 00:16:37.873 fused_ordering(1006) 00:16:37.873 fused_ordering(1007) 00:16:37.873 fused_ordering(1008) 00:16:37.873 fused_ordering(1009) 00:16:37.874 fused_ordering(1010) 00:16:37.874 fused_ordering(1011) 00:16:37.874 fused_ordering(1012) 00:16:37.874 fused_ordering(1013) 00:16:37.874 fused_ordering(1014) 00:16:37.874 fused_ordering(1015) 00:16:37.874 fused_ordering(1016) 00:16:37.874 fused_ordering(1017) 00:16:37.874 fused_ordering(1018) 00:16:37.874 fused_ordering(1019) 00:16:37.874 fused_ordering(1020) 00:16:37.874 fused_ordering(1021) 00:16:37.874 fused_ordering(1022) 00:16:37.874 fused_ordering(1023) 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.874 rmmod nvme_tcp 00:16:37.874 rmmod nvme_fabrics 00:16:37.874 rmmod nvme_keyring 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 124315 ']' 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 124315 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 124315 ']' 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 124315 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:37.874 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 124315 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 124315' 00:16:38.133 killing process with pid 124315 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 124315 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 124315 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:38.133 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:40.669
00:16:40.669 real 0m11.011s
00:16:40.669 user 0m6.014s
00:16:40.669 sys 0m5.757s
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:40.669 ************************************
00:16:40.669 END TEST nvmf_fused_ordering
00:16:40.669 ************************************
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:40.669 ************************************
00:16:40.669 START TEST nvmf_ns_masking
00:16:40.669 ************************************
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:16:40.669 * Looking for test storage...
00:16:40.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:16:40.669 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:16:40.670 12:23:11
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.670 --rc genhtml_branch_coverage=1 00:16:40.670 --rc genhtml_function_coverage=1 00:16:40.670 --rc genhtml_legend=1 00:16:40.670 --rc geninfo_all_blocks=1 00:16:40.670 --rc geninfo_unexecuted_blocks=1 00:16:40.670 00:16:40.670 ' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.670 --rc genhtml_branch_coverage=1 00:16:40.670 --rc genhtml_function_coverage=1 00:16:40.670 --rc genhtml_legend=1 00:16:40.670 --rc geninfo_all_blocks=1 00:16:40.670 --rc geninfo_unexecuted_blocks=1 00:16:40.670 00:16:40.670 ' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.670 --rc genhtml_branch_coverage=1 00:16:40.670 --rc genhtml_function_coverage=1 00:16:40.670 --rc genhtml_legend=1 00:16:40.670 --rc geninfo_all_blocks=1 00:16:40.670 --rc geninfo_unexecuted_blocks=1 00:16:40.670 00:16:40.670 ' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.670 --rc genhtml_branch_coverage=1 00:16:40.670 --rc 
genhtml_function_coverage=1 00:16:40.670 --rc genhtml_legend=1 00:16:40.670 --rc geninfo_all_blocks=1 00:16:40.670 --rc geninfo_unexecuted_blocks=1 00:16:40.670 00:16:40.670 ' 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.670 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f24bc7e6-b4f8-4094-bba1-221cb5eeec1c 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bace0d4c-5b65-4941-bfa7-29ad99d14d9d 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=870064b7-0b4f-48f1-ba1d-df016ffc05a9 00:16:40.670 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.671 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.942 12:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.942 12:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:45.942 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:45.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:16:45.942 Found net devices under 0000:af:00.0: cvl_0_0 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:45.942 Found net devices under 0000:af:00.1: cvl_0_1 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.942 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:45.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:16:45.943 00:16:45.943 --- 10.0.0.2 ping statistics --- 00:16:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.943 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:16:45.943 00:16:45.943 --- 10.0.0.1 ping statistics --- 00:16:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.943 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.943 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=128385 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 128385 
00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 128385 ']' 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.202 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:46.203 [2024-11-06 12:23:17.622679] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:16:46.203 [2024-11-06 12:23:17.622743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.203 [2024-11-06 12:23:17.722229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.203 [2024-11-06 12:23:17.770649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.203 [2024-11-06 12:23:17.770691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:46.203 [2024-11-06 12:23:17.770702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.203 [2024-11-06 12:23:17.770710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.203 [2024-11-06 12:23:17.770718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.203 [2024-11-06 12:23:17.771406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:47.139 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.140 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:47.140 [2024-11-06 12:23:18.703729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.140 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:47.140 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:47.140 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:47.398 Malloc1 00:16:47.398 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:47.657 Malloc2 00:16:47.657 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.916 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:48.178 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.438 [2024-11-06 12:23:19.916431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.438 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:48.438 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 870064b7-0b4f-48f1-ba1d-df016ffc05a9 -a 10.0.0.2 -s 4420 -i 4 00:16:48.697 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.697 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:48.697 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.697 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:48.697 12:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:50.602 [ 0]:0x1 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:50.602 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:50.861 
12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a08aea29068481e9a79059699220aef 00:16:50.861 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a08aea29068481e9a79059699220aef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:50.861 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:51.120 [ 0]:0x1 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:51.120 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a08aea29068481e9a79059699220aef 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a08aea29068481e9a79059699220aef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:51.121 [ 1]:0x2 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.121 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.380 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:51.647 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:51.647 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 870064b7-0b4f-48f1-ba1d-df016ffc05a9 -a 10.0.0.2 -s 4420 -i 4 00:16:51.907 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:51.907 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:51.907 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.907 12:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:16:51.907 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:16:51.907 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:53.811 [ 0]:0x2 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:53.811 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.071 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:54.071 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.071 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:54.330 [ 0]:0x1 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a08aea29068481e9a79059699220aef 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a08aea29068481e9a79059699220aef != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:54.330 [ 1]:0x2 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.330 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:54.588 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:54.589 [ 0]:0x2 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:54.589 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:54.847 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:54.847 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:54.847 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:54.847 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.847 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:55.106 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:55.106 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 870064b7-0b4f-48f1-ba1d-df016ffc05a9 -a 10.0.0.2 -s 4420 -i 4 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:16:55.365 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:57.270 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.529 [ 0]:0x1 00:16:57.529 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:57.529 12:23:28 
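The waitforserial step above (common/autotest_common.sh@1200-1210) polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the device count matches the expected number, with a bounded retry loop. A simplified sketch of that polling pattern, with the probe command parameterized (the helper name and structure here are illustrative, not the exact SPDK implementation):

```shell
# Sketch of the waitforserial polling pattern: retry a counting probe
# until it reports the expected number of devices, up to 16 attempts.
# The real helper's probe is: lsblk -l -o NAME,SERIAL | grep -c <serial>
wait_for_count() {
    local expected=$1; shift
    local i=0
    while (( i++ <= 15 )); do
        if (( $("$@") == expected )); then
            return 0   # all expected devices are present
        fi
        sleep 2        # matches the `sleep 2` between attempts in the trace
    done
    return 1           # gave up after the retry budget
}
```

Here the trace passes `-i 4` I/O queues and expects 2 namespaces, so the loop returns once `nvme_devices == nvme_device_counter` as shown at @1210.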
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a08aea29068481e9a79059699220aef 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a08aea29068481e9a79059699220aef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:57.529 [ 1]:0x2 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.529 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:57.788 
12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.788 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:58.047 [ 0]:0x2 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.047 12:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:58.047 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:58.305 [2024-11-06 12:23:29.783166] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:58.305 request: 00:16:58.305 { 00:16:58.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.305 "nsid": 2, 00:16:58.305 "host": "nqn.2016-06.io.spdk:host1", 00:16:58.305 "method": "nvmf_ns_remove_host", 00:16:58.305 "req_id": 1 00:16:58.305 } 00:16:58.305 Got JSON-RPC error response 00:16:58.305 response: 00:16:58.305 { 00:16:58.305 "code": -32602, 00:16:58.305 "message": "Invalid parameters" 00:16:58.305 } 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:58.305 12:23:29 
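The `NOT ns_is_visible 0x1` steps use a negation wrapper from common/autotest_common.sh: the es/valid_exec_arg bookkeeping visible at @638-677 in the trace. A simplified sketch of the pattern, with the error-status tracking and `type -t` argument validation omitted:

```shell
# Simplified sketch of the NOT helper pattern (common/autotest_common.sh
# @650-677 in the trace): invert a command's exit status so a test step
# passes exactly when the guarded command fails.
NOT() {
    if "$@"; then
        return 1   # the command unexpectedly succeeded
    fi
    return 0       # the expected failure occurred
}
```

After `nvmf_ns_remove_host` revokes host access, the trace asserts `NOT ns_is_visible 0x1`: the step succeeds only because the namespace is no longer visible (es=1 inverted to success at @677).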
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:58.305 [ 0]:0x2 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199a77915db442238ea526b6f1436537 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199a77915db442238ea526b6f1436537 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:58.305 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:58.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=130850 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 130850 
/var/tmp/host.sock 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 130850 ']' 00:16:58.563 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:58.564 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.564 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:58.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:58.564 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.564 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:58.564 [2024-11-06 12:23:30.024323] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:16:58.564 [2024-11-06 12:23:30.024388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130850 ] 00:16:58.564 [2024-11-06 12:23:30.092699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.564 [2024-11-06 12:23:30.132122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.822 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.822 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:16:58.822 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.080 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:59.337 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f24bc7e6-b4f8-4094-bba1-221cb5eeec1c 00:16:59.337 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:59.337 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F24BC7E6B4F84094BBA1221CB5EEEC1C -i 00:16:59.595 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bace0d4c-5b65-4941-bfa7-29ad99d14d9d 00:16:59.595 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:59.595 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BACE0D4C5B654941BFA729AD99D14D9D -i 00:16:59.852 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:00.110 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:00.368 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:00.368 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:00.935 nvme0n1 00:17:00.935 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:00.935 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:01.193 nvme1n2 00:17:01.193 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:01.193 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
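At ns_masking.sh@124-125 the uuid2nguid helper (using `tr -d -` from nvmf/common.sh@787) converts a bdev UUID into the NGUID passed to `nvmf_subsystem_add_ns ... -g`. A minimal sketch, assuming only the transformation the trace shows, namely stripping dashes and uppercasing the hex digits; the exact helper in nvmf/common.sh may differ in detail:

```shell
# Sketch of uuid2nguid as observed in the trace:
#   f24bc7e6-b4f8-4094-bba1-221cb5eeec1c -> F24BC7E6B4F84094BBA1221CB5EEEC1C
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid f24bc7e6-b4f8-4094-bba1-221cb5eeec1c
# -> F24BC7E6B4F84094BBA1221CB5EEEC1C
```

This is why the later bdev_get_bdevs checks can match the lowercase dashed UUIDs (f24bc7e6-... and bace0d4c-...) back against the namespaces created with the uppercase NGUID form.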
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:01.193 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:01.193 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:01.193 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:01.451 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:01.451 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:01.451 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:01.451 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:01.709 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f24bc7e6-b4f8-4094-bba1-221cb5eeec1c == \f\2\4\b\c\7\e\6\-\b\4\f\8\-\4\0\9\4\-\b\b\a\1\-\2\2\1\c\b\5\e\e\e\c\1\c ]] 00:17:01.709 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:01.709 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:01.709 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:01.967 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bace0d4c-5b65-4941-bfa7-29ad99d14d9d == \b\a\c\e\0\d\4\c\-\5\b\6\5\-\4\9\4\1\-\b\f\a\7\-\2\9\a\d\9\9\d\1\4\d\9\d ]] 00:17:01.967 12:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.226 12:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f24bc7e6-b4f8-4094-bba1-221cb5eeec1c 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F24BC7E6B4F84094BBA1221CB5EEEC1C 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F24BC7E6B4F84094BBA1221CB5EEEC1C 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:02.484 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F24BC7E6B4F84094BBA1221CB5EEEC1C 00:17:02.742 [2024-11-06 12:23:34.344657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:02.742 [2024-11-06 12:23:34.344697] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:02.742 [2024-11-06 12:23:34.344708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.742 request: 00:17:02.742 { 00:17:02.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.742 "namespace": { 00:17:02.742 "bdev_name": "invalid", 00:17:02.742 "nsid": 1, 00:17:02.742 "nguid": "F24BC7E6B4F84094BBA1221CB5EEEC1C", 00:17:02.742 "no_auto_visible": false 00:17:02.742 }, 00:17:02.742 "method": "nvmf_subsystem_add_ns", 00:17:02.742 "req_id": 1 00:17:02.742 } 00:17:02.742 Got JSON-RPC error response 00:17:02.742 response: 00:17:02.742 { 00:17:02.742 "code": -32602, 00:17:02.742 "message": "Invalid parameters" 00:17:02.742 } 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
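The failed `nvmf_subsystem_add_ns ... invalid` call above returns a structured JSON-RPC error (code -32602, "Invalid parameters"), matching the response shape printed in the trace. When scripting against rpc.py, that error body can be picked apart with jq; a small sketch using the response fields from this trace, with the payload inlined for illustration (in practice it would be captured from rpc.py's output):

```shell
# Parse the JSON-RPC error response shape seen in the trace with jq.
resp='{"code": -32602, "message": "Invalid parameters"}'
code=$(echo "$resp" | jq -r .code)
msg=$(echo "$resp" | jq -r .message)
echo "rpc failed: $msg (code $code)"
# -> rpc failed: Invalid parameters (code -32602)
```

The test then only needs the exit status: rpc.py exits non-zero on such an error, which is what the surrounding NOT wrapper converts into a passing step.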
common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f24bc7e6-b4f8-4094-bba1-221cb5eeec1c 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F24BC7E6B4F84094BBA1221CB5EEEC1C -i 00:17:03.000 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:05.529 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:05.529 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 130850 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 130850 ']' 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 130850 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 130850 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 130850' 00:17:05.530 killing process with pid 130850 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 130850 00:17:05.530 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 130850 00:17:05.788 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.047 rmmod nvme_tcp 00:17:06.047 rmmod 
nvme_fabrics 00:17:06.047 rmmod nvme_keyring 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 128385 ']' 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 128385 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 128385 ']' 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 128385 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 128385 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 128385' 00:17:06.047 killing process with pid 128385 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 128385 00:17:06.047 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 128385 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.306 12:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.306 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.837 00:17:08.837 real 0m28.069s 00:17:08.837 user 0m36.252s 00:17:08.837 sys 0m6.940s 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:08.837 ************************************ 00:17:08.837 END TEST nvmf_ns_masking 00:17:08.837 ************************************ 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.837 ************************************ 00:17:08.837 START TEST nvmf_nvme_cli 00:17:08.837 ************************************ 00:17:08.837 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:08.837 * Looking for test storage... 00:17:08.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.837 12:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.837 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:08.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.838 --rc genhtml_branch_coverage=1 00:17:08.838 --rc genhtml_function_coverage=1 00:17:08.838 --rc genhtml_legend=1 00:17:08.838 --rc geninfo_all_blocks=1 00:17:08.838 --rc geninfo_unexecuted_blocks=1 00:17:08.838 
00:17:08.838 ' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:08.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.838 --rc genhtml_branch_coverage=1 00:17:08.838 --rc genhtml_function_coverage=1 00:17:08.838 --rc genhtml_legend=1 00:17:08.838 --rc geninfo_all_blocks=1 00:17:08.838 --rc geninfo_unexecuted_blocks=1 00:17:08.838 00:17:08.838 ' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:08.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.838 --rc genhtml_branch_coverage=1 00:17:08.838 --rc genhtml_function_coverage=1 00:17:08.838 --rc genhtml_legend=1 00:17:08.838 --rc geninfo_all_blocks=1 00:17:08.838 --rc geninfo_unexecuted_blocks=1 00:17:08.838 00:17:08.838 ' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:08.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.838 --rc genhtml_branch_coverage=1 00:17:08.838 --rc genhtml_function_coverage=1 00:17:08.838 --rc genhtml_legend=1 00:17:08.838 --rc geninfo_all_blocks=1 00:17:08.838 --rc geninfo_unexecuted_blocks=1 00:17:08.838 00:17:08.838 ' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.838 12:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.838 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:14.105 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:14.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:14.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.105 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:14.105 Found net devices under 0000:af:00.0: cvl_0_0 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.105 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:14.106 Found net devices under 0000:af:00.1: cvl_0_1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.106 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:17:14.106 00:17:14.106 --- 10.0.0.2 ping statistics --- 00:17:14.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.106 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:14.106 00:17:14.106 --- 10.0.0.1 ping statistics --- 00:17:14.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.106 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.106 12:23:45 
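nvmf_tcp_init above isolates the target NIC in its own network namespace so initiator and target traffic actually crosses the link even though both ends live on one host, then opens TCP port 4420 and ping-checks both directions. A dry-run sketch of that sequence — it echoes the commands instead of executing them, since the real steps need root; the interface names, addresses, and port follow the trace:

```shell
# Dry-run sketch of nvmf_tcp_init: move the target NIC into a netns, address
# both ends, open the NVMe/TCP port, and sanity-check connectivity. Echoing
# rather than executing so this runs unprivileged; drop the echos (and add the
# reverse ping inside the netns, as the harness does) for the real setup.
setup_tcp_netns() {
    local tgt_if=$1 ini_if=$2 ns=${1}_ns_spdk
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"   # initiator side reaching the target address
}
```

Note the harness also tags its iptables rule with an SPDK_NVMF comment so teardown can later strip exactly those rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.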
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=135897 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 135897 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 135897 ']' 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.106 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.106 [2024-11-06 12:23:45.581905] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
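waitforlisten above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock, retrying up to max_retries=100. A generic retry loop in the same spirit — the function name and signature are mine, not the harness's, and the polled command is whatever readiness check applies (for SPDK, an rpc.py call against the socket):

```shell
# Sketch of a waitforlisten-style poll: retry a readiness check until it
# succeeds or the retry budget (100 in the harness) runs out.
wait_for_ready() {
    local check=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        if eval "$check" >/dev/null 2>&1; then
            return 0            # check passed: the target is up
        fi
        sleep 0.1
    done
    return 1                    # never came up within the budget
}
```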
00:17:14.106 [2024-11-06 12:23:45.581963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.106 [2024-11-06 12:23:45.682056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.365 [2024-11-06 12:23:45.733850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.365 [2024-11-06 12:23:45.733890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.365 [2024-11-06 12:23:45.733900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.365 [2024-11-06 12:23:45.733909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.365 [2024-11-06 12:23:45.733916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.365 [2024-11-06 12:23:45.735974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.365 [2024-11-06 12:23:45.736079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.365 [2024-11-06 12:23:45.736181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.365 [2024-11-06 12:23:45.736184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 [2024-11-06 12:23:45.884035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 Malloc0 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 Malloc1 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 [2024-11-06 12:23:45.968136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.365 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:14.623 00:17:14.623 Discovery Log Number of Records 2, Generation counter 2 00:17:14.623 =====Discovery Log Entry 0====== 00:17:14.623 trtype: tcp 00:17:14.623 adrfam: ipv4 00:17:14.623 subtype: current discovery subsystem 00:17:14.623 treq: not required 00:17:14.623 portid: 0 00:17:14.623 trsvcid: 4420 
00:17:14.623 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:14.623 traddr: 10.0.0.2 00:17:14.623 eflags: explicit discovery connections, duplicate discovery information 00:17:14.623 sectype: none 00:17:14.623 =====Discovery Log Entry 1====== 00:17:14.623 trtype: tcp 00:17:14.623 adrfam: ipv4 00:17:14.623 subtype: nvme subsystem 00:17:14.623 treq: not required 00:17:14.623 portid: 0 00:17:14.623 trsvcid: 4420 00:17:14.623 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:14.623 traddr: 10.0.0.2 00:17:14.623 eflags: none 00:17:14.623 sectype: none 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:14.623 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.001 12:23:47 
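The provisioning above is a fixed RPC sequence: create the TCP transport, back two 64 MiB malloc bdevs, expose them as namespaces of one subsystem, then listen on 10.0.0.2:4420 (plus the discovery listener that produced the two-record discovery log). A sketch of the same calls — `$RPC` defaults to echo here as a dry run, which is my addition; point it at your tree's spdk/scripts/rpc.py (with -s <sock> if needed) to provision for real:

```shell
# Sketch of the nvme_cli.sh provisioning RPCs. RPC is an assumption added for
# testability: set it to spdk/scripts/rpc.py to execute, or leave the default
# "echo" to just print the calls.
RPC="${RPC:-echo}"

provision_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

Once the listener is up, `nvme discover -t tcp -a 10.0.0.2 -s 4420` returns the two log entries seen above, and `nvme connect -n nqn.2016-06.io.spdk:cnode1 ...` surfaces the two namespaces as /dev/nvme0n1 and /dev/nvme0n2.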
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:16.001 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:17:16.001 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.001 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:17:16.001 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:17:16.001 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:18.528 
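waitforserial above polls `lsblk -l -o NAME,SERIAL` and counts rows carrying the subsystem serial until the expected namespace count (2 here) appears. A condensed sketch named after the harness helper — the `LSBLK_CMD` indirection is my addition so the counting logic can be exercised without real NVMe devices:

```shell
# Sketch of waitforserial: count block devices whose SERIAL column matches,
# succeeding once the count reaches the expected namespace count.
# LSBLK_CMD is an assumption for testability; the harness calls lsblk directly.
LSBLK_CMD="${LSBLK_CMD:-lsblk -l -o NAME,SERIAL}"

count_serial() {
    $LSBLK_CMD | grep -c "$1" || true   # grep -c exits nonzero on zero matches
}

waitforserial() {
    local serial=$1 want=${2:-1} i=0
    while (( i++ <= 15 )); do           # same retry budget as the harness loop
        (( $(count_serial "$serial") == want )) && return 0
        sleep 1
    done
    return 1
}
```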
12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:18.528 /dev/nvme0n2 ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:18.528 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.787 rmmod nvme_tcp 00:17:18.787 rmmod nvme_fabrics 00:17:18.787 rmmod nvme_keyring 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 135897 ']' 
00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 135897 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 135897 ']' 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 135897 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 135897 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 135897' 00:17:18.787 killing process with pid 135897 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 135897 00:17:18.787 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 135897 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
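killprocess above signals the nvmf_tgt reactor process and then waits on it, so the next test never races a dying target. The harness version also checks the process name (and handles sudo-launched targets); a condensed sketch of the kill-then-reap core:

```shell
# Sketch of the killprocess/wait teardown: signal the process, then reap it.
# The real helper additionally verifies the process name before killing.
killprocess() {
    local pid=$1
    kill "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    wait "$pid" 2>/dev/null || true       # reap; a signal exit code is expected
}
```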
00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.046 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:21.580 00:17:21.580 real 0m12.700s 00:17:21.580 user 0m20.724s 00:17:21.580 sys 0m4.718s 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:21.580 ************************************ 00:17:21.580 END TEST nvmf_nvme_cli 00:17:21.580 ************************************ 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.580 ************************************ 00:17:21.580 START TEST 
nvmf_vfio_user 00:17:21.580 ************************************ 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:21.580 * Looking for test storage... 00:17:21.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.580 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.581 12:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:21.581 12:23:52 
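The `lt 1.15 2` check traced above comes from cmp_versions in scripts/common.sh: split both version strings on dots and compare numerically field by field, treating missing fields as 0. A condensed sketch of that comparison (function name is mine; the field-by-field loop mirrors the ver1[v]/ver2[v] walk in the trace):

```shell
# Sketch of a dotted-version "less than" test in the spirit of cmp_versions:
# split on dots, compare numerically field by field, pad short versions with 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}
```

Numeric comparison is what makes 1.2 < 1.10 hold, which a plain string compare would get wrong — the reason the harness walks fields instead of comparing strings.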
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.581 --rc genhtml_branch_coverage=1 00:17:21.581 --rc genhtml_function_coverage=1 00:17:21.581 --rc genhtml_legend=1 00:17:21.581 --rc geninfo_all_blocks=1 00:17:21.581 --rc geninfo_unexecuted_blocks=1 00:17:21.581 00:17:21.581 ' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.581 --rc genhtml_branch_coverage=1 00:17:21.581 --rc genhtml_function_coverage=1 00:17:21.581 --rc genhtml_legend=1 00:17:21.581 --rc geninfo_all_blocks=1 00:17:21.581 --rc geninfo_unexecuted_blocks=1 00:17:21.581 00:17:21.581 ' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.581 --rc genhtml_branch_coverage=1 00:17:21.581 --rc genhtml_function_coverage=1 00:17:21.581 --rc genhtml_legend=1 00:17:21.581 --rc geninfo_all_blocks=1 00:17:21.581 --rc geninfo_unexecuted_blocks=1 00:17:21.581 00:17:21.581 ' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.581 --rc genhtml_branch_coverage=1 00:17:21.581 --rc genhtml_function_coverage=1 00:17:21.581 --rc genhtml_legend=1 00:17:21.581 --rc geninfo_all_blocks=1 00:17:21.581 --rc geninfo_unexecuted_blocks=1 00:17:21.581 00:17:21.581 ' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.581 
12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:21.581 12:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=137374 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 137374' 00:17:21.581 Process pid: 137374 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 137374 00:17:21.581 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 
137374 ']' 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:21.582 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:21.582 [2024-11-06 12:23:52.998363] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:17:21.582 [2024-11-06 12:23:52.998427] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.582 [2024-11-06 12:23:53.092616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.582 [2024-11-06 12:23:53.142589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.582 [2024-11-06 12:23:53.142630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.582 [2024-11-06 12:23:53.142640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.582 [2024-11-06 12:23:53.142649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.582 [2024-11-06 12:23:53.142657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.582 [2024-11-06 12:23:53.144528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.582 [2024-11-06 12:23:53.144559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.582 [2024-11-06 12:23:53.144669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.582 [2024-11-06 12:23:53.144670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.840 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:21.840 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:17:21.840 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:22.777 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:23.083 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:23.083 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:23.083 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:23.083 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:23.083 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:23.349 Malloc1 00:17:23.349 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:23.622 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:23.898 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:24.194 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:24.194 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:24.194 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:24.476 Malloc2 00:17:24.476 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:24.734 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:24.991 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:25.249 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:25.249 [2024-11-06 12:23:56.771891] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:17:25.249 [2024-11-06 12:23:56.771927] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137941 ] 00:17:25.249 [2024-11-06 12:23:56.828064] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:25.249 [2024-11-06 12:23:56.836846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.249 [2024-11-06 12:23:56.836877] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd8320d9000 00:17:25.249 [2024-11-06 12:23:56.837847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.838842] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.839850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.840860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.841858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.842867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.843869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.844888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.249 [2024-11-06 12:23:56.845890] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.249 [2024-11-06 12:23:56.845908] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd8320ce000 00:17:25.249 [2024-11-06 12:23:56.847319] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.508 [2024-11-06 12:23:56.868164] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:25.508 [2024-11-06 12:23:56.868195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:25.508 [2024-11-06 12:23:56.871047] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:25.508 [2024-11-06 12:23:56.871101] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:25.508 [2024-11-06 12:23:56.871196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:25.508 [2024-11-06 12:23:56.871216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:25.508 [2024-11-06 12:23:56.871224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:25.508 [2024-11-06 12:23:56.872046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:25.508 [2024-11-06 12:23:56.872059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:25.508 [2024-11-06 12:23:56.872068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:25.508 [2024-11-06 12:23:56.873053] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:25.508 [2024-11-06 12:23:56.873065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:25.508 [2024-11-06 12:23:56.873075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.874059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:25.509 [2024-11-06 12:23:56.874070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.875063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:25.509 [2024-11-06 12:23:56.875075] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:25.509 [2024-11-06 12:23:56.875082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.875091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.875201] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:25.509 [2024-11-06 12:23:56.875208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.875215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:25.509 [2024-11-06 12:23:56.878469] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:25.509 [2024-11-06 12:23:56.879082] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:25.509 [2024-11-06 12:23:56.880088] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:25.509 [2024-11-06 12:23:56.881091] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.509 [2024-11-06 12:23:56.881157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:25.509 [2024-11-06 12:23:56.882104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:25.509 [2024-11-06 12:23:56.882116] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:25.509 [2024-11-06 12:23:56.882123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:25.509 [2024-11-06 12:23:56.882158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.509 [2024-11-06 12:23:56.882184] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.509 [2024-11-06 12:23:56.882189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.509 [2024-11-06 12:23:56.882205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882257] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:25.509 [2024-11-06 12:23:56.882263] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:25.509 [2024-11-06 12:23:56.882270] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:25.509 [2024-11-06 12:23:56.882276] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:25.509 [2024-11-06 12:23:56.882284] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:25.509 [2024-11-06 12:23:56.882291] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:25.509 [2024-11-06 12:23:56.882297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.509 [2024-11-06 12:23:56.882358] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.509 [2024-11-06 12:23:56.882369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.509 [2024-11-06 12:23:56.882379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.509 [2024-11-06 12:23:56.882386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882426] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:25.509 [2024-11-06 12:23:56.882433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882584] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:25.509 [2024-11-06 12:23:56.882590] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:25.509 [2024-11-06 12:23:56.882594] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.509 [2024-11-06 12:23:56.882603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882633] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:25.509 [2024-11-06 12:23:56.882647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882667] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.509 [2024-11-06 12:23:56.882673] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.509 [2024-11-06 12:23:56.882680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.509 [2024-11-06 12:23:56.882688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:25.509 [2024-11-06 12:23:56.882720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882739] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.509 [2024-11-06 12:23:56.882745] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.509 [2024-11-06 12:23:56.882750] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.509 [2024-11-06 12:23:56.882758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.509 [2024-11-06 12:23:56.882772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:25.509 [2024-11-06 12:23:56.882782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:25.509 [2024-11-06 12:23:56.882828] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:25.509 [2024-11-06 12:23:56.882834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:25.510 [2024-11-06 12:23:56.882841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:25.510 [2024-11-06 12:23:56.882861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.882889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.882898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.882912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.882921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.882938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.882951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.882967] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:25.510 [2024-11-06 12:23:56.882973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:25.510 [2024-11-06 12:23:56.882978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:25.510 [2024-11-06 12:23:56.882982] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:25.510 [2024-11-06 12:23:56.882987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:25.510 [2024-11-06 12:23:56.882995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:25.510 [2024-11-06 12:23:56.883005] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:25.510 [2024-11-06 12:23:56.883011] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:25.510 [2024-11-06 12:23:56.883015] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.510 [2024-11-06 12:23:56.883023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.883032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:25.510 [2024-11-06 12:23:56.883038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.510 [2024-11-06 12:23:56.883043] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.510 [2024-11-06 12:23:56.883051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.883061] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:25.510 [2024-11-06 12:23:56.883066] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:25.510 [2024-11-06 12:23:56.883071] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.510 [2024-11-06 12:23:56.883079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:25.510 [2024-11-06 12:23:56.883088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 
12:23:56.883104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.883117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:25.510 [2024-11-06 12:23:56.883126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:25.510 ===================================================== 00:17:25.510 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:25.510 ===================================================== 00:17:25.510 Controller Capabilities/Features 00:17:25.510 ================================ 00:17:25.510 Vendor ID: 4e58 00:17:25.510 Subsystem Vendor ID: 4e58 00:17:25.510 Serial Number: SPDK1 00:17:25.510 Model Number: SPDK bdev Controller 00:17:25.510 Firmware Version: 25.01 00:17:25.510 Recommended Arb Burst: 6 00:17:25.510 IEEE OUI Identifier: 8d 6b 50 00:17:25.510 Multi-path I/O 00:17:25.510 May have multiple subsystem ports: Yes 00:17:25.510 May have multiple controllers: Yes 00:17:25.510 Associated with SR-IOV VF: No 00:17:25.510 Max Data Transfer Size: 131072 00:17:25.510 Max Number of Namespaces: 32 00:17:25.510 Max Number of I/O Queues: 127 00:17:25.510 NVMe Specification Version (VS): 1.3 00:17:25.510 NVMe Specification Version (Identify): 1.3 00:17:25.510 Maximum Queue Entries: 256 00:17:25.510 Contiguous Queues Required: Yes 00:17:25.510 Arbitration Mechanisms Supported 00:17:25.510 Weighted Round Robin: Not Supported 00:17:25.510 Vendor Specific: Not Supported 00:17:25.510 Reset Timeout: 15000 ms 00:17:25.510 Doorbell Stride: 4 bytes 00:17:25.510 NVM Subsystem Reset: Not Supported 00:17:25.510 Command Sets Supported 00:17:25.510 NVM Command Set: Supported 00:17:25.510 Boot Partition: Not Supported 00:17:25.510 Memory Page Size Minimum: 4096 bytes 00:17:25.510 
Memory Page Size Maximum: 4096 bytes 00:17:25.510 Persistent Memory Region: Not Supported 00:17:25.510 Optional Asynchronous Events Supported 00:17:25.510 Namespace Attribute Notices: Supported 00:17:25.510 Firmware Activation Notices: Not Supported 00:17:25.510 ANA Change Notices: Not Supported 00:17:25.510 PLE Aggregate Log Change Notices: Not Supported 00:17:25.510 LBA Status Info Alert Notices: Not Supported 00:17:25.510 EGE Aggregate Log Change Notices: Not Supported 00:17:25.510 Normal NVM Subsystem Shutdown event: Not Supported 00:17:25.510 Zone Descriptor Change Notices: Not Supported 00:17:25.510 Discovery Log Change Notices: Not Supported 00:17:25.510 Controller Attributes 00:17:25.510 128-bit Host Identifier: Supported 00:17:25.510 Non-Operational Permissive Mode: Not Supported 00:17:25.510 NVM Sets: Not Supported 00:17:25.510 Read Recovery Levels: Not Supported 00:17:25.510 Endurance Groups: Not Supported 00:17:25.510 Predictable Latency Mode: Not Supported 00:17:25.510 Traffic Based Keep ALive: Not Supported 00:17:25.510 Namespace Granularity: Not Supported 00:17:25.510 SQ Associations: Not Supported 00:17:25.510 UUID List: Not Supported 00:17:25.510 Multi-Domain Subsystem: Not Supported 00:17:25.510 Fixed Capacity Management: Not Supported 00:17:25.510 Variable Capacity Management: Not Supported 00:17:25.510 Delete Endurance Group: Not Supported 00:17:25.510 Delete NVM Set: Not Supported 00:17:25.510 Extended LBA Formats Supported: Not Supported 00:17:25.510 Flexible Data Placement Supported: Not Supported 00:17:25.510 00:17:25.510 Controller Memory Buffer Support 00:17:25.510 ================================ 00:17:25.510 Supported: No 00:17:25.510 00:17:25.510 Persistent Memory Region Support 00:17:25.510 ================================ 00:17:25.510 Supported: No 00:17:25.510 00:17:25.510 Admin Command Set Attributes 00:17:25.510 ============================ 00:17:25.510 Security Send/Receive: Not Supported 00:17:25.510 Format NVM: Not Supported 
00:17:25.510 Firmware Activate/Download: Not Supported 00:17:25.510 Namespace Management: Not Supported 00:17:25.510 Device Self-Test: Not Supported 00:17:25.510 Directives: Not Supported 00:17:25.510 NVMe-MI: Not Supported 00:17:25.510 Virtualization Management: Not Supported 00:17:25.510 Doorbell Buffer Config: Not Supported 00:17:25.510 Get LBA Status Capability: Not Supported 00:17:25.510 Command & Feature Lockdown Capability: Not Supported 00:17:25.510 Abort Command Limit: 4 00:17:25.510 Async Event Request Limit: 4 00:17:25.510 Number of Firmware Slots: N/A 00:17:25.510 Firmware Slot 1 Read-Only: N/A 00:17:25.510 Firmware Activation Without Reset: N/A 00:17:25.510 Multiple Update Detection Support: N/A 00:17:25.510 Firmware Update Granularity: No Information Provided 00:17:25.510 Per-Namespace SMART Log: No 00:17:25.510 Asymmetric Namespace Access Log Page: Not Supported 00:17:25.510 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:25.510 Command Effects Log Page: Supported 00:17:25.510 Get Log Page Extended Data: Supported 00:17:25.510 Telemetry Log Pages: Not Supported 00:17:25.510 Persistent Event Log Pages: Not Supported 00:17:25.510 Supported Log Pages Log Page: May Support 00:17:25.510 Commands Supported & Effects Log Page: Not Supported 00:17:25.510 Feature Identifiers & Effects Log Page:May Support 00:17:25.510 NVMe-MI Commands & Effects Log Page: May Support 00:17:25.510 Data Area 4 for Telemetry Log: Not Supported 00:17:25.510 Error Log Page Entries Supported: 128 00:17:25.510 Keep Alive: Supported 00:17:25.510 Keep Alive Granularity: 10000 ms 00:17:25.510 00:17:25.510 NVM Command Set Attributes 00:17:25.510 ========================== 00:17:25.510 Submission Queue Entry Size 00:17:25.510 Max: 64 00:17:25.510 Min: 64 00:17:25.510 Completion Queue Entry Size 00:17:25.510 Max: 16 00:17:25.510 Min: 16 00:17:25.510 Number of Namespaces: 32 00:17:25.510 Compare Command: Supported 00:17:25.510 Write Uncorrectable Command: Not Supported 00:17:25.510 Dataset 
Management Command: Supported 00:17:25.510 Write Zeroes Command: Supported 00:17:25.510 Set Features Save Field: Not Supported 00:17:25.510 Reservations: Not Supported 00:17:25.511 Timestamp: Not Supported 00:17:25.511 Copy: Supported 00:17:25.511 Volatile Write Cache: Present 00:17:25.511 Atomic Write Unit (Normal): 1 00:17:25.511 Atomic Write Unit (PFail): 1 00:17:25.511 Atomic Compare & Write Unit: 1 00:17:25.511 Fused Compare & Write: Supported 00:17:25.511 Scatter-Gather List 00:17:25.511 SGL Command Set: Supported (Dword aligned) 00:17:25.511 SGL Keyed: Not Supported 00:17:25.511 SGL Bit Bucket Descriptor: Not Supported 00:17:25.511 SGL Metadata Pointer: Not Supported 00:17:25.511 Oversized SGL: Not Supported 00:17:25.511 SGL Metadata Address: Not Supported 00:17:25.511 SGL Offset: Not Supported 00:17:25.511 Transport SGL Data Block: Not Supported 00:17:25.511 Replay Protected Memory Block: Not Supported 00:17:25.511 00:17:25.511 Firmware Slot Information 00:17:25.511 ========================= 00:17:25.511 Active slot: 1 00:17:25.511 Slot 1 Firmware Revision: 25.01 00:17:25.511 00:17:25.511 00:17:25.511 Commands Supported and Effects 00:17:25.511 ============================== 00:17:25.511 Admin Commands 00:17:25.511 -------------- 00:17:25.511 Get Log Page (02h): Supported 00:17:25.511 Identify (06h): Supported 00:17:25.511 Abort (08h): Supported 00:17:25.511 Set Features (09h): Supported 00:17:25.511 Get Features (0Ah): Supported 00:17:25.511 Asynchronous Event Request (0Ch): Supported 00:17:25.511 Keep Alive (18h): Supported 00:17:25.511 I/O Commands 00:17:25.511 ------------ 00:17:25.511 Flush (00h): Supported LBA-Change 00:17:25.511 Write (01h): Supported LBA-Change 00:17:25.511 Read (02h): Supported 00:17:25.511 Compare (05h): Supported 00:17:25.511 Write Zeroes (08h): Supported LBA-Change 00:17:25.511 Dataset Management (09h): Supported LBA-Change 00:17:25.511 Copy (19h): Supported LBA-Change 00:17:25.511 00:17:25.511 Error Log 00:17:25.511 ========= 
00:17:25.511 00:17:25.511 Arbitration 00:17:25.511 =========== 00:17:25.511 Arbitration Burst: 1 00:17:25.511 00:17:25.511 Power Management 00:17:25.511 ================ 00:17:25.511 Number of Power States: 1 00:17:25.511 Current Power State: Power State #0 00:17:25.511 Power State #0: 00:17:25.511 Max Power: 0.00 W 00:17:25.511 Non-Operational State: Operational 00:17:25.511 Entry Latency: Not Reported 00:17:25.511 Exit Latency: Not Reported 00:17:25.511 Relative Read Throughput: 0 00:17:25.511 Relative Read Latency: 0 00:17:25.511 Relative Write Throughput: 0 00:17:25.511 Relative Write Latency: 0 00:17:25.511 Idle Power: Not Reported 00:17:25.511 Active Power: Not Reported 00:17:25.511 Non-Operational Permissive Mode: Not Supported 00:17:25.511 00:17:25.511 Health Information 00:17:25.511 ================== 00:17:25.511 Critical Warnings: 00:17:25.511 Available Spare Space: OK 00:17:25.511 Temperature: OK 00:17:25.511 Device Reliability: OK 00:17:25.511 Read Only: No 00:17:25.511 Volatile Memory Backup: OK 00:17:25.511 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:25.511 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:25.511 Available Spare: 0% 00:17:25.511 Available Sp[2024-11-06 12:23:56.883249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:25.511 [2024-11-06 12:23:56.883263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:25.511 [2024-11-06 12:23:56.883298] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:25.511 [2024-11-06 12:23:56.883310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.511 [2024-11-06 12:23:56.883318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.511 [2024-11-06 12:23:56.883329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.511 [2024-11-06 12:23:56.883337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.511 [2024-11-06 12:23:56.884125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:25.511 [2024-11-06 12:23:56.884138] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:25.511 [2024-11-06 12:23:56.885125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.511 [2024-11-06 12:23:56.885176] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:25.511 [2024-11-06 12:23:56.885185] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:25.511 [2024-11-06 12:23:56.886134] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:25.511 [2024-11-06 12:23:56.886149] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:25.511 [2024-11-06 12:23:56.886207] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:25.511 [2024-11-06 12:23:56.891469] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.511 are Threshold: 0% 00:17:25.511 Life Percentage Used: 0% 00:17:25.511 Data Units Read: 0 00:17:25.511 Data 
Units Written: 0 00:17:25.511 Host Read Commands: 0 00:17:25.511 Host Write Commands: 0 00:17:25.511 Controller Busy Time: 0 minutes 00:17:25.511 Power Cycles: 0 00:17:25.511 Power On Hours: 0 hours 00:17:25.511 Unsafe Shutdowns: 0 00:17:25.511 Unrecoverable Media Errors: 0 00:17:25.511 Lifetime Error Log Entries: 0 00:17:25.511 Warning Temperature Time: 0 minutes 00:17:25.511 Critical Temperature Time: 0 minutes 00:17:25.511 00:17:25.511 Number of Queues 00:17:25.511 ================ 00:17:25.511 Number of I/O Submission Queues: 127 00:17:25.511 Number of I/O Completion Queues: 127 00:17:25.511 00:17:25.511 Active Namespaces 00:17:25.511 ================= 00:17:25.511 Namespace ID:1 00:17:25.511 Error Recovery Timeout: Unlimited 00:17:25.511 Command Set Identifier: NVM (00h) 00:17:25.511 Deallocate: Supported 00:17:25.511 Deallocated/Unwritten Error: Not Supported 00:17:25.511 Deallocated Read Value: Unknown 00:17:25.511 Deallocate in Write Zeroes: Not Supported 00:17:25.511 Deallocated Guard Field: 0xFFFF 00:17:25.511 Flush: Supported 00:17:25.511 Reservation: Supported 00:17:25.511 Namespace Sharing Capabilities: Multiple Controllers 00:17:25.511 Size (in LBAs): 131072 (0GiB) 00:17:25.511 Capacity (in LBAs): 131072 (0GiB) 00:17:25.511 Utilization (in LBAs): 131072 (0GiB) 00:17:25.511 NGUID: 21CC95A59E9143148AE1BBD55925A472 00:17:25.511 UUID: 21cc95a5-9e91-4314-8ae1-bbd55925a472 00:17:25.511 Thin Provisioning: Not Supported 00:17:25.511 Per-NS Atomic Units: Yes 00:17:25.511 Atomic Boundary Size (Normal): 0 00:17:25.511 Atomic Boundary Size (PFail): 0 00:17:25.511 Atomic Boundary Offset: 0 00:17:25.511 Maximum Single Source Range Length: 65535 00:17:25.511 Maximum Copy Length: 65535 00:17:25.511 Maximum Source Range Count: 1 00:17:25.511 NGUID/EUI64 Never Reused: No 00:17:25.511 Namespace Write Protected: No 00:17:25.511 Number of LBA Formats: 1 00:17:25.511 Current LBA Format: LBA Format #00 00:17:25.511 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:25.511 00:17:25.511 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:25.769 [2024-11-06 12:23:57.134314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:31.023 Initializing NVMe Controllers 00:17:31.023 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:31.023 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:31.023 Initialization complete. Launching workers. 00:17:31.023 ======================================================== 00:17:31.023 Latency(us) 00:17:31.023 Device Information : IOPS MiB/s Average min max 00:17:31.023 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40096.00 156.62 3192.28 870.16 6853.77 00:17:31.023 ======================================================== 00:17:31.023 Total : 40096.00 156.62 3192.28 870.16 6853.77 00:17:31.023 00:17:31.023 [2024-11-06 12:24:02.155485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:31.023 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:31.023 [2024-11-06 12:24:02.389666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.284 Initializing NVMe Controllers 00:17:36.284 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:17:36.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:36.284 Initialization complete. Launching workers. 00:17:36.284 ======================================================== 00:17:36.284 Latency(us) 00:17:36.284 Device Information : IOPS MiB/s Average min max 00:17:36.284 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16024.98 62.60 7998.75 7619.57 11974.55 00:17:36.284 ======================================================== 00:17:36.284 Total : 16024.98 62.60 7998.75 7619.57 11974.55 00:17:36.284 00:17:36.284 [2024-11-06 12:24:07.427322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.284 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:36.284 [2024-11-06 12:24:07.642372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:41.546 [2024-11-06 12:24:12.701705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:41.546 Initializing NVMe Controllers 00:17:41.546 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:41.546 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:41.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:41.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:41.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:41.546 Initialization complete. Launching workers. 
00:17:41.546 Starting thread on core 2 00:17:41.546 Starting thread on core 3 00:17:41.546 Starting thread on core 1 00:17:41.546 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:41.546 [2024-11-06 12:24:13.042236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:44.827 [2024-11-06 12:24:16.096674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:44.827 Initializing NVMe Controllers 00:17:44.827 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:44.827 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:44.827 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:44.827 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:44.827 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:44.827 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:44.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:44.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:44.827 Initialization complete. Launching workers. 
00:17:44.827 Starting thread on core 1 with urgent priority queue 00:17:44.827 Starting thread on core 2 with urgent priority queue 00:17:44.827 Starting thread on core 3 with urgent priority queue 00:17:44.827 Starting thread on core 0 with urgent priority queue 00:17:44.827 SPDK bdev Controller (SPDK1 ) core 0: 8387.67 IO/s 11.92 secs/100000 ios 00:17:44.827 SPDK bdev Controller (SPDK1 ) core 1: 8959.33 IO/s 11.16 secs/100000 ios 00:17:44.827 SPDK bdev Controller (SPDK1 ) core 2: 7952.33 IO/s 12.57 secs/100000 ios 00:17:44.827 SPDK bdev Controller (SPDK1 ) core 3: 9802.33 IO/s 10.20 secs/100000 ios 00:17:44.827 ======================================================== 00:17:44.827 00:17:44.827 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:45.084 [2024-11-06 12:24:16.450975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.084 Initializing NVMe Controllers 00:17:45.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.084 Namespace ID: 1 size: 0GB 00:17:45.084 Initialization complete. 00:17:45.084 INFO: using host memory buffer for IO 00:17:45.084 Hello world! 
00:17:45.084 [2024-11-06 12:24:16.483188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:45.084 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:45.342 [2024-11-06 12:24:16.835941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:46.275 Initializing NVMe Controllers 00:17:46.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.275 Initialization complete. Launching workers. 00:17:46.275 submit (in ns) avg, min, max = 9022.9, 4558.2, 4003071.8 00:17:46.275 complete (in ns) avg, min, max = 21008.3, 2718.2, 5995348.2 00:17:46.275 00:17:46.275 Submit histogram 00:17:46.275 ================ 00:17:46.275 Range in us Cumulative Count 00:17:46.275 4.538 - 4.567: 0.0058% ( 1) 00:17:46.275 4.567 - 4.596: 0.0525% ( 8) 00:17:46.275 4.596 - 4.625: 1.4818% ( 245) 00:17:46.275 4.625 - 4.655: 4.2530% ( 475) 00:17:46.275 4.655 - 4.684: 7.8642% ( 619) 00:17:46.275 4.684 - 4.713: 15.5884% ( 1324) 00:17:46.275 4.713 - 4.742: 32.2793% ( 2861) 00:17:46.275 4.742 - 4.771: 44.8282% ( 2151) 00:17:46.275 4.771 - 4.800: 55.3818% ( 1809) 00:17:46.275 4.800 - 4.829: 66.9448% ( 1982) 00:17:46.275 4.829 - 4.858: 76.2791% ( 1600) 00:17:46.275 4.858 - 4.887: 83.5891% ( 1253) 00:17:46.275 4.887 - 4.916: 86.6227% ( 520) 00:17:46.275 4.916 - 4.945: 87.7778% ( 198) 00:17:46.275 4.945 - 4.975: 88.5946% ( 140) 00:17:46.275 4.975 - 5.004: 90.3681% ( 304) 00:17:46.275 5.004 - 5.033: 92.2292% ( 319) 00:17:46.275 5.033 - 5.062: 94.1485% ( 329) 00:17:46.275 5.062 - 5.091: 95.9454% ( 308) 00:17:46.275 5.091 - 5.120: 97.4506% ( 258) 00:17:46.275 5.120 - 5.149: 98.2323% ( 
134) 00:17:46.275 5.149 - 5.178: 98.8974% ( 114) 00:17:46.275 5.178 - 5.207: 99.2007% ( 52) 00:17:46.275 5.207 - 5.236: 99.3874% ( 32) 00:17:46.275 5.236 - 5.265: 99.4049% ( 3) 00:17:46.275 5.265 - 5.295: 99.4399% ( 6) 00:17:46.275 5.295 - 5.324: 99.4458% ( 1) 00:17:46.275 5.324 - 5.353: 99.4516% ( 1) 00:17:46.275 6.865 - 6.895: 99.4574% ( 1) 00:17:46.275 7.127 - 7.156: 99.4691% ( 2) 00:17:46.275 7.302 - 7.331: 99.4808% ( 2) 00:17:46.275 7.622 - 7.680: 99.4924% ( 2) 00:17:46.275 7.680 - 7.738: 99.4983% ( 1) 00:17:46.275 7.738 - 7.796: 99.5099% ( 2) 00:17:46.275 7.796 - 7.855: 99.5158% ( 1) 00:17:46.275 7.971 - 8.029: 99.5216% ( 1) 00:17:46.275 8.145 - 8.204: 99.5274% ( 1) 00:17:46.275 8.262 - 8.320: 99.5333% ( 1) 00:17:46.275 8.320 - 8.378: 99.5450% ( 2) 00:17:46.275 8.495 - 8.553: 99.5508% ( 1) 00:17:46.275 8.553 - 8.611: 99.5683% ( 3) 00:17:46.275 8.611 - 8.669: 99.5858% ( 3) 00:17:46.275 8.669 - 8.727: 99.6033% ( 3) 00:17:46.275 8.844 - 8.902: 99.6266% ( 4) 00:17:46.275 8.902 - 8.960: 99.6325% ( 1) 00:17:46.275 8.960 - 9.018: 99.6383% ( 1) 00:17:46.275 9.193 - 9.251: 99.6441% ( 1) 00:17:46.275 9.251 - 9.309: 99.6558% ( 2) 00:17:46.275 9.309 - 9.367: 99.6733% ( 3) 00:17:46.275 9.367 - 9.425: 99.6908% ( 3) 00:17:46.275 9.425 - 9.484: 99.7083% ( 3) 00:17:46.275 9.484 - 9.542: 99.7141% ( 1) 00:17:46.275 9.600 - 9.658: 99.7433% ( 5) 00:17:46.275 9.775 - 9.833: 99.7491% ( 1) 00:17:46.275 9.833 - 9.891: 99.7666% ( 3) 00:17:46.275 9.891 - 9.949: 99.7725% ( 1) 00:17:46.275 9.949 - 10.007: 99.7900% ( 3) 00:17:46.275 10.182 - 10.240: 99.7958% ( 1) 00:17:46.275 10.473 - 10.531: 99.8016% ( 1) 00:17:46.275 10.531 - 10.589: 99.8075% ( 1) 00:17:46.275 10.589 - 10.647: 99.8133% ( 1) 00:17:46.275 10.705 - 10.764: 99.8191% ( 1) 00:17:46.275 10.822 - 10.880: 99.8366% ( 3) 00:17:46.275 11.113 - 11.171: 99.8425% ( 1) 00:17:46.275 11.171 - 11.229: 99.8483% ( 1) 00:17:46.275 11.229 - 11.287: 99.8542% ( 1) 00:17:46.275 11.287 - 11.345: 99.8600% ( 1) 00:17:46.275 11.404 - 11.462: 
99.8658% ( 1) 00:17:46.275 11.462 - 11.520: 99.8717% ( 1) 00:17:46.275 11.636 - 11.695: 99.8775% ( 1) 00:17:46.275 11.869 - 11.927: 99.8833% ( 1) 00:17:46.275 12.044 - 12.102: 99.8892% ( 1) 00:17:46.275 15.942 - 16.058: 99.8950% ( 1) 00:17:46.275 3991.738 - 4021.527: 100.0000% ( 18) 00:17:46.275 00:17:46.275 Complete histogram 00:17:46.275 ================== 00:17:46.275 Range in us Cumulative Count 00:17:46.275 2.705 - 2.720: 0.0058% ( 1) 00:17:46.275 2.720 - 2.735: 0.1108% ( 18) 00:17:46.275 2.735 - 2.749: 0.5717% ( 79) 00:17:46.275 2.749 - [2024-11-06 12:24:17.861914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.533 2.764: 2.5436% ( 338) 00:17:46.533 2.764 - 2.778: 21.5273% ( 3254) 00:17:46.533 2.778 - 2.793: 61.3325% ( 6823) 00:17:46.533 2.793 - 2.807: 79.9720% ( 3195) 00:17:46.533 2.807 - 2.822: 85.6601% ( 975) 00:17:46.533 2.822 - 2.836: 89.0671% ( 584) 00:17:46.533 2.836 - 2.851: 91.2374% ( 372) 00:17:46.533 2.851 - 2.865: 94.2710% ( 520) 00:17:46.533 2.865 - 2.880: 97.1647% ( 496) 00:17:46.533 2.880 - 2.895: 98.5590% ( 239) 00:17:46.533 2.895 - 2.909: 99.0024% ( 76) 00:17:46.533 2.909 - 2.924: 99.1482% ( 25) 00:17:46.533 2.924 - 2.938: 99.1716% ( 4) 00:17:46.533 2.938 - 2.953: 99.1774% ( 1) 00:17:46.533 2.953 - 2.967: 99.1891% ( 2) 00:17:46.533 2.982 - 2.996: 99.1949% ( 1) 00:17:46.533 3.025 - 3.040: 99.2007% ( 1) 00:17:46.533 5.120 - 5.149: 99.2066% ( 1) 00:17:46.533 5.178 - 5.207: 99.2124% ( 1) 00:17:46.533 5.440 - 5.469: 99.2182% ( 1) 00:17:46.533 5.527 - 5.556: 99.2241% ( 1) 00:17:46.533 5.702 - 5.731: 99.2299% ( 1) 00:17:46.533 5.789 - 5.818: 99.2358% ( 1) 00:17:46.533 6.109 - 6.138: 99.2474% ( 2) 00:17:46.533 6.138 - 6.167: 99.2533% ( 1) 00:17:46.533 6.167 - 6.196: 99.2591% ( 1) 00:17:46.533 6.196 - 6.225: 99.2649% ( 1) 00:17:46.533 6.225 - 6.255: 99.2708% ( 1) 00:17:46.533 6.255 - 6.284: 99.2766% ( 1) 00:17:46.533 6.313 - 6.342: 99.2824% ( 1) 00:17:46.533 6.400 - 6.429: 99.2883% ( 1) 
00:17:46.533 6.429 - 6.458: 99.2941% ( 1) 00:17:46.533 6.487 - 6.516: 99.2999% ( 1) 00:17:46.533 6.575 - 6.604: 99.3058% ( 1) 00:17:46.533 6.662 - 6.691: 99.3174% ( 2) 00:17:46.533 6.778 - 6.807: 99.3233% ( 1) 00:17:46.533 6.836 - 6.865: 99.3291% ( 1) 00:17:46.533 6.924 - 6.953: 99.3349% ( 1) 00:17:46.533 6.982 - 7.011: 99.3408% ( 1) 00:17:46.533 7.011 - 7.040: 99.3466% ( 1) 00:17:46.533 7.069 - 7.098: 99.3524% ( 1) 00:17:46.534 7.098 - 7.127: 99.3583% ( 1) 00:17:46.534 7.127 - 7.156: 99.3641% ( 1) 00:17:46.534 7.156 - 7.185: 99.3758% ( 2) 00:17:46.534 7.185 - 7.215: 99.3816% ( 1) 00:17:46.534 7.215 - 7.244: 99.3874% ( 1) 00:17:46.534 7.244 - 7.273: 99.4049% ( 3) 00:17:46.534 7.389 - 7.418: 99.4224% ( 3) 00:17:46.534 7.418 - 7.447: 99.4341% ( 2) 00:17:46.534 7.505 - 7.564: 99.4399% ( 1) 00:17:46.534 7.680 - 7.738: 99.4458% ( 1) 00:17:46.534 7.738 - 7.796: 99.4516% ( 1) 00:17:46.534 7.796 - 7.855: 99.4574% ( 1) 00:17:46.534 7.971 - 8.029: 99.4633% ( 1) 00:17:46.534 8.087 - 8.145: 99.4749% ( 2) 00:17:46.534 8.145 - 8.204: 99.4866% ( 2) 00:17:46.534 8.262 - 8.320: 99.4924% ( 1) 00:17:46.534 8.553 - 8.611: 99.4983% ( 1) 00:17:46.534 8.785 - 8.844: 99.5041% ( 1) 00:17:46.534 8.960 - 9.018: 99.5099% ( 1) 00:17:46.534 9.018 - 9.076: 99.5158% ( 1) 00:17:46.534 9.484 - 9.542: 99.5216% ( 1) 00:17:46.534 9.658 - 9.716: 99.5274% ( 1) 00:17:46.534 9.716 - 9.775: 99.5333% ( 1) 00:17:46.534 10.124 - 10.182: 99.5391% ( 1) 00:17:46.534 10.182 - 10.240: 99.5450% ( 1) 00:17:46.534 2174.604 - 2189.498: 99.5508% ( 1) 00:17:46.534 3991.738 - 4021.527: 99.9942% ( 76) 00:17:46.534 5987.607 - 6017.396: 100.0000% ( 1) 00:17:46.534 00:17:46.534 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:46.534 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:46.534 12:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:46.534 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:46.534 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:46.792 [ 00:17:46.792 { 00:17:46.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.792 "subtype": "Discovery", 00:17:46.792 "listen_addresses": [], 00:17:46.792 "allow_any_host": true, 00:17:46.792 "hosts": [] 00:17:46.792 }, 00:17:46.792 { 00:17:46.792 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:46.792 "subtype": "NVMe", 00:17:46.792 "listen_addresses": [ 00:17:46.792 { 00:17:46.792 "trtype": "VFIOUSER", 00:17:46.792 "adrfam": "IPv4", 00:17:46.792 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:46.792 "trsvcid": "0" 00:17:46.792 } 00:17:46.792 ], 00:17:46.792 "allow_any_host": true, 00:17:46.792 "hosts": [], 00:17:46.792 "serial_number": "SPDK1", 00:17:46.792 "model_number": "SPDK bdev Controller", 00:17:46.792 "max_namespaces": 32, 00:17:46.792 "min_cntlid": 1, 00:17:46.792 "max_cntlid": 65519, 00:17:46.792 "namespaces": [ 00:17:46.792 { 00:17:46.792 "nsid": 1, 00:17:46.792 "bdev_name": "Malloc1", 00:17:46.792 "name": "Malloc1", 00:17:46.792 "nguid": "21CC95A59E9143148AE1BBD55925A472", 00:17:46.792 "uuid": "21cc95a5-9e91-4314-8ae1-bbd55925a472" 00:17:46.792 } 00:17:46.792 ] 00:17:46.792 }, 00:17:46.792 { 00:17:46.792 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:46.792 "subtype": "NVMe", 00:17:46.792 "listen_addresses": [ 00:17:46.792 { 00:17:46.792 "trtype": "VFIOUSER", 00:17:46.792 "adrfam": "IPv4", 00:17:46.792 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:46.792 "trsvcid": "0" 00:17:46.792 } 00:17:46.792 ], 00:17:46.792 "allow_any_host": true, 00:17:46.792 "hosts": [], 00:17:46.792 "serial_number": 
"SPDK2", 00:17:46.792 "model_number": "SPDK bdev Controller", 00:17:46.792 "max_namespaces": 32, 00:17:46.792 "min_cntlid": 1, 00:17:46.792 "max_cntlid": 65519, 00:17:46.792 "namespaces": [ 00:17:46.792 { 00:17:46.792 "nsid": 1, 00:17:46.792 "bdev_name": "Malloc2", 00:17:46.792 "name": "Malloc2", 00:17:46.792 "nguid": "579701095CD34189ADA1D580A2F9AF8A", 00:17:46.792 "uuid": "57970109-5cd3-4189-ada1-d580a2f9af8a" 00:17:46.792 } 00:17:46.792 ] 00:17:46.792 } 00:17:46.792 ] 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=142379 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:46.792 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:46.792 [2024-11-06 12:24:18.395513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.050 Malloc3 00:17:47.050 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:47.307 [2024-11-06 12:24:18.766071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.307 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.307 Asynchronous Event Request test 00:17:47.307 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:47.307 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:47.307 Registering asynchronous event callbacks... 00:17:47.307 Starting namespace attribute notice tests for all controllers... 00:17:47.307 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:47.307 aer_cb - Changed Namespace 00:17:47.307 Cleaning up... 
00:17:47.566 [ 00:17:47.566 { 00:17:47.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.566 "subtype": "Discovery", 00:17:47.566 "listen_addresses": [], 00:17:47.566 "allow_any_host": true, 00:17:47.566 "hosts": [] 00:17:47.566 }, 00:17:47.566 { 00:17:47.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.566 "subtype": "NVMe", 00:17:47.566 "listen_addresses": [ 00:17:47.566 { 00:17:47.566 "trtype": "VFIOUSER", 00:17:47.566 "adrfam": "IPv4", 00:17:47.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.566 "trsvcid": "0" 00:17:47.566 } 00:17:47.566 ], 00:17:47.566 "allow_any_host": true, 00:17:47.566 "hosts": [], 00:17:47.566 "serial_number": "SPDK1", 00:17:47.566 "model_number": "SPDK bdev Controller", 00:17:47.566 "max_namespaces": 32, 00:17:47.566 "min_cntlid": 1, 00:17:47.566 "max_cntlid": 65519, 00:17:47.566 "namespaces": [ 00:17:47.566 { 00:17:47.566 "nsid": 1, 00:17:47.566 "bdev_name": "Malloc1", 00:17:47.566 "name": "Malloc1", 00:17:47.566 "nguid": "21CC95A59E9143148AE1BBD55925A472", 00:17:47.566 "uuid": "21cc95a5-9e91-4314-8ae1-bbd55925a472" 00:17:47.566 }, 00:17:47.566 { 00:17:47.566 "nsid": 2, 00:17:47.566 "bdev_name": "Malloc3", 00:17:47.566 "name": "Malloc3", 00:17:47.566 "nguid": "58B9C11FFDBC4E13B315D2C227A1230E", 00:17:47.566 "uuid": "58b9c11f-fdbc-4e13-b315-d2c227a1230e" 00:17:47.566 } 00:17:47.566 ] 00:17:47.566 }, 00:17:47.566 { 00:17:47.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.566 "subtype": "NVMe", 00:17:47.566 "listen_addresses": [ 00:17:47.566 { 00:17:47.566 "trtype": "VFIOUSER", 00:17:47.566 "adrfam": "IPv4", 00:17:47.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.566 "trsvcid": "0" 00:17:47.566 } 00:17:47.566 ], 00:17:47.566 "allow_any_host": true, 00:17:47.566 "hosts": [], 00:17:47.566 "serial_number": "SPDK2", 00:17:47.566 "model_number": "SPDK bdev Controller", 00:17:47.566 "max_namespaces": 32, 00:17:47.566 "min_cntlid": 1, 00:17:47.566 "max_cntlid": 65519, 00:17:47.566 "namespaces": [ 
00:17:47.566 { 00:17:47.566 "nsid": 1, 00:17:47.566 "bdev_name": "Malloc2", 00:17:47.566 "name": "Malloc2", 00:17:47.566 "nguid": "579701095CD34189ADA1D580A2F9AF8A", 00:17:47.566 "uuid": "57970109-5cd3-4189-ada1-d580a2f9af8a" 00:17:47.566 } 00:17:47.566 ] 00:17:47.566 } 00:17:47.566 ] 00:17:47.566 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 142379 00:17:47.566 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.566 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:47.566 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:47.566 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.566 [2024-11-06 12:24:19.071545] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:17:47.566 [2024-11-06 12:24:19.071581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142488 ] 00:17:47.566 [2024-11-06 12:24:19.127994] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:47.566 [2024-11-06 12:24:19.137774] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.566 [2024-11-06 12:24:19.137806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7953722000 00:17:47.566 [2024-11-06 12:24:19.138781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.139790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.140793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.141798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.142811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.143821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.144836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.566 
[2024-11-06 12:24:19.145841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.566 [2024-11-06 12:24:19.146854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.566 [2024-11-06 12:24:19.146869] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7953717000 00:17:47.566 [2024-11-06 12:24:19.148281] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.566 [2024-11-06 12:24:19.165055] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:47.566 [2024-11-06 12:24:19.165092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:47.566 [2024-11-06 12:24:19.170196] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:47.566 [2024-11-06 12:24:19.170257] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.566 [2024-11-06 12:24:19.170356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:47.566 [2024-11-06 12:24:19.170373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:47.566 [2024-11-06 12:24:19.170381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:47.566 [2024-11-06 12:24:19.171204] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:47.566 [2024-11-06 12:24:19.171218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:47.566 [2024-11-06 12:24:19.171227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:47.566 [2024-11-06 12:24:19.172204] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:47.566 [2024-11-06 12:24:19.172217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:47.566 [2024-11-06 12:24:19.172227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.566 [2024-11-06 12:24:19.173213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:47.566 [2024-11-06 12:24:19.173227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.566 [2024-11-06 12:24:19.174223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:47.566 [2024-11-06 12:24:19.174234] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:47.566 [2024-11-06 12:24:19.174241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:47.566 [2024-11-06 12:24:19.174250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.566 [2024-11-06 12:24:19.174361] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:47.566 [2024-11-06 12:24:19.174368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.566 [2024-11-06 12:24:19.174375] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:47.566 [2024-11-06 12:24:19.175235] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:47.567 [2024-11-06 12:24:19.176236] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:47.567 [2024-11-06 12:24:19.177248] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:47.567 [2024-11-06 12:24:19.178243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.567 [2024-11-06 12:24:19.178295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.567 [2024-11-06 12:24:19.179261] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:47.567 [2024-11-06 12:24:19.179277] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.567 [2024-11-06 12:24:19.179284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:47.567 [2024-11-06 12:24:19.179309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:47.567 [2024-11-06 12:24:19.179323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.567 [2024-11-06 12:24:19.179339] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.567 [2024-11-06 12:24:19.179346] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.567 [2024-11-06 12:24:19.179350] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.567 [2024-11-06 12:24:19.179365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.826 [2024-11-06 12:24:19.187471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.826 [2024-11-06 12:24:19.187488] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:47.826 [2024-11-06 12:24:19.187495] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:47.826 [2024-11-06 12:24:19.187501] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:47.826 [2024-11-06 12:24:19.187507] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.826 [2024-11-06 12:24:19.187517] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:47.826 [2024-11-06 12:24:19.187523] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:47.826 [2024-11-06 12:24:19.187530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.187542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.187555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.826 [2024-11-06 12:24:19.195468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.826 [2024-11-06 12:24:19.195486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.826 [2024-11-06 12:24:19.195497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.826 [2024-11-06 12:24:19.195508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.826 [2024-11-06 12:24:19.195518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.826 [2024-11-06 12:24:19.195525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.195534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.195550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.826 [2024-11-06 12:24:19.203467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.826 [2024-11-06 12:24:19.203482] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:47.826 [2024-11-06 12:24:19.203490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.203498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.203506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.826 [2024-11-06 12:24:19.203517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.826 [2024-11-06 12:24:19.211467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.826 [2024-11-06 12:24:19.211550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.211562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.827 
[2024-11-06 12:24:19.211572] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.827 [2024-11-06 12:24:19.211578] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.827 [2024-11-06 12:24:19.211583] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.211591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.219467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.219482] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:47.827 [2024-11-06 12:24:19.219493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.219504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.219513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.827 [2024-11-06 12:24:19.219519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.827 [2024-11-06 12:24:19.219524] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.219532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.227468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.227488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.227499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.227513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.827 [2024-11-06 12:24:19.227519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.827 [2024-11-06 12:24:19.227523] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.227532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.235468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.235481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235528] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.827 [2024-11-06 12:24:19.235534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:47.827 [2024-11-06 12:24:19.235541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:47.827 [2024-11-06 12:24:19.235562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.243466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.243485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.251467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.251485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.259467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 
12:24:19.259485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.267465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.267487] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.827 [2024-11-06 12:24:19.267494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.827 [2024-11-06 12:24:19.267499] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.827 [2024-11-06 12:24:19.267503] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.827 [2024-11-06 12:24:19.267508] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.827 [2024-11-06 12:24:19.267519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.827 [2024-11-06 12:24:19.267529] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.827 [2024-11-06 12:24:19.267535] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.827 [2024-11-06 12:24:19.267540] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.267547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.267557] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.827 [2024-11-06 12:24:19.267562] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.827 [2024-11-06 12:24:19.267567] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.267575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.267584] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.827 [2024-11-06 12:24:19.267590] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.827 [2024-11-06 12:24:19.267595] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.827 [2024-11-06 12:24:19.267602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.827 [2024-11-06 12:24:19.275469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.275487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.275501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.827 [2024-11-06 12:24:19.275510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.827 ===================================================== 00:17:47.827 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.827 ===================================================== 00:17:47.827 Controller Capabilities/Features 00:17:47.827 
================================ 00:17:47.827 Vendor ID: 4e58 00:17:47.827 Subsystem Vendor ID: 4e58 00:17:47.827 Serial Number: SPDK2 00:17:47.827 Model Number: SPDK bdev Controller 00:17:47.827 Firmware Version: 25.01 00:17:47.827 Recommended Arb Burst: 6 00:17:47.827 IEEE OUI Identifier: 8d 6b 50 00:17:47.827 Multi-path I/O 00:17:47.827 May have multiple subsystem ports: Yes 00:17:47.827 May have multiple controllers: Yes 00:17:47.827 Associated with SR-IOV VF: No 00:17:47.827 Max Data Transfer Size: 131072 00:17:47.827 Max Number of Namespaces: 32 00:17:47.827 Max Number of I/O Queues: 127 00:17:47.827 NVMe Specification Version (VS): 1.3 00:17:47.827 NVMe Specification Version (Identify): 1.3 00:17:47.827 Maximum Queue Entries: 256 00:17:47.827 Contiguous Queues Required: Yes 00:17:47.827 Arbitration Mechanisms Supported 00:17:47.827 Weighted Round Robin: Not Supported 00:17:47.827 Vendor Specific: Not Supported 00:17:47.827 Reset Timeout: 15000 ms 00:17:47.827 Doorbell Stride: 4 bytes 00:17:47.827 NVM Subsystem Reset: Not Supported 00:17:47.827 Command Sets Supported 00:17:47.827 NVM Command Set: Supported 00:17:47.827 Boot Partition: Not Supported 00:17:47.827 Memory Page Size Minimum: 4096 bytes 00:17:47.827 Memory Page Size Maximum: 4096 bytes 00:17:47.827 Persistent Memory Region: Not Supported 00:17:47.827 Optional Asynchronous Events Supported 00:17:47.827 Namespace Attribute Notices: Supported 00:17:47.827 Firmware Activation Notices: Not Supported 00:17:47.827 ANA Change Notices: Not Supported 00:17:47.827 PLE Aggregate Log Change Notices: Not Supported 00:17:47.827 LBA Status Info Alert Notices: Not Supported 00:17:47.828 EGE Aggregate Log Change Notices: Not Supported 00:17:47.828 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.828 Zone Descriptor Change Notices: Not Supported 00:17:47.828 Discovery Log Change Notices: Not Supported 00:17:47.828 Controller Attributes 00:17:47.828 128-bit Host Identifier: Supported 00:17:47.828 
Non-Operational Permissive Mode: Not Supported 00:17:47.828 NVM Sets: Not Supported 00:17:47.828 Read Recovery Levels: Not Supported 00:17:47.828 Endurance Groups: Not Supported 00:17:47.828 Predictable Latency Mode: Not Supported 00:17:47.828 Traffic Based Keep ALive: Not Supported 00:17:47.828 Namespace Granularity: Not Supported 00:17:47.828 SQ Associations: Not Supported 00:17:47.828 UUID List: Not Supported 00:17:47.828 Multi-Domain Subsystem: Not Supported 00:17:47.828 Fixed Capacity Management: Not Supported 00:17:47.828 Variable Capacity Management: Not Supported 00:17:47.828 Delete Endurance Group: Not Supported 00:17:47.828 Delete NVM Set: Not Supported 00:17:47.828 Extended LBA Formats Supported: Not Supported 00:17:47.828 Flexible Data Placement Supported: Not Supported 00:17:47.828 00:17:47.828 Controller Memory Buffer Support 00:17:47.828 ================================ 00:17:47.828 Supported: No 00:17:47.828 00:17:47.828 Persistent Memory Region Support 00:17:47.828 ================================ 00:17:47.828 Supported: No 00:17:47.828 00:17:47.828 Admin Command Set Attributes 00:17:47.828 ============================ 00:17:47.828 Security Send/Receive: Not Supported 00:17:47.828 Format NVM: Not Supported 00:17:47.828 Firmware Activate/Download: Not Supported 00:17:47.828 Namespace Management: Not Supported 00:17:47.828 Device Self-Test: Not Supported 00:17:47.828 Directives: Not Supported 00:17:47.828 NVMe-MI: Not Supported 00:17:47.828 Virtualization Management: Not Supported 00:17:47.828 Doorbell Buffer Config: Not Supported 00:17:47.828 Get LBA Status Capability: Not Supported 00:17:47.828 Command & Feature Lockdown Capability: Not Supported 00:17:47.828 Abort Command Limit: 4 00:17:47.828 Async Event Request Limit: 4 00:17:47.828 Number of Firmware Slots: N/A 00:17:47.828 Firmware Slot 1 Read-Only: N/A 00:17:47.828 Firmware Activation Without Reset: N/A 00:17:47.828 Multiple Update Detection Support: N/A 00:17:47.828 Firmware Update 
Granularity: No Information Provided 00:17:47.828 Per-Namespace SMART Log: No 00:17:47.828 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.828 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:47.828 Command Effects Log Page: Supported 00:17:47.828 Get Log Page Extended Data: Supported 00:17:47.828 Telemetry Log Pages: Not Supported 00:17:47.828 Persistent Event Log Pages: Not Supported 00:17:47.828 Supported Log Pages Log Page: May Support 00:17:47.828 Commands Supported & Effects Log Page: Not Supported 00:17:47.828 Feature Identifiers & Effects Log Page:May Support 00:17:47.828 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.828 Data Area 4 for Telemetry Log: Not Supported 00:17:47.828 Error Log Page Entries Supported: 128 00:17:47.828 Keep Alive: Supported 00:17:47.828 Keep Alive Granularity: 10000 ms 00:17:47.828 00:17:47.828 NVM Command Set Attributes 00:17:47.828 ========================== 00:17:47.828 Submission Queue Entry Size 00:17:47.828 Max: 64 00:17:47.828 Min: 64 00:17:47.828 Completion Queue Entry Size 00:17:47.828 Max: 16 00:17:47.828 Min: 16 00:17:47.828 Number of Namespaces: 32 00:17:47.828 Compare Command: Supported 00:17:47.828 Write Uncorrectable Command: Not Supported 00:17:47.828 Dataset Management Command: Supported 00:17:47.828 Write Zeroes Command: Supported 00:17:47.828 Set Features Save Field: Not Supported 00:17:47.828 Reservations: Not Supported 00:17:47.828 Timestamp: Not Supported 00:17:47.828 Copy: Supported 00:17:47.828 Volatile Write Cache: Present 00:17:47.828 Atomic Write Unit (Normal): 1 00:17:47.828 Atomic Write Unit (PFail): 1 00:17:47.828 Atomic Compare & Write Unit: 1 00:17:47.828 Fused Compare & Write: Supported 00:17:47.828 Scatter-Gather List 00:17:47.828 SGL Command Set: Supported (Dword aligned) 00:17:47.828 SGL Keyed: Not Supported 00:17:47.828 SGL Bit Bucket Descriptor: Not Supported 00:17:47.828 SGL Metadata Pointer: Not Supported 00:17:47.828 Oversized SGL: Not Supported 00:17:47.828 SGL 
Metadata Address: Not Supported 00:17:47.828 SGL Offset: Not Supported 00:17:47.828 Transport SGL Data Block: Not Supported 00:17:47.828 Replay Protected Memory Block: Not Supported 00:17:47.828 00:17:47.828 Firmware Slot Information 00:17:47.828 ========================= 00:17:47.828 Active slot: 1 00:17:47.828 Slot 1 Firmware Revision: 25.01 00:17:47.828 00:17:47.828 00:17:47.828 Commands Supported and Effects 00:17:47.828 ============================== 00:17:47.828 Admin Commands 00:17:47.828 -------------- 00:17:47.828 Get Log Page (02h): Supported 00:17:47.828 Identify (06h): Supported 00:17:47.828 Abort (08h): Supported 00:17:47.828 Set Features (09h): Supported 00:17:47.828 Get Features (0Ah): Supported 00:17:47.828 Asynchronous Event Request (0Ch): Supported 00:17:47.828 Keep Alive (18h): Supported 00:17:47.828 I/O Commands 00:17:47.828 ------------ 00:17:47.828 Flush (00h): Supported LBA-Change 00:17:47.828 Write (01h): Supported LBA-Change 00:17:47.828 Read (02h): Supported 00:17:47.828 Compare (05h): Supported 00:17:47.828 Write Zeroes (08h): Supported LBA-Change 00:17:47.828 Dataset Management (09h): Supported LBA-Change 00:17:47.828 Copy (19h): Supported LBA-Change 00:17:47.828 00:17:47.828 Error Log 00:17:47.828 ========= 00:17:47.828 00:17:47.828 Arbitration 00:17:47.828 =========== 00:17:47.828 Arbitration Burst: 1 00:17:47.828 00:17:47.828 Power Management 00:17:47.828 ================ 00:17:47.828 Number of Power States: 1 00:17:47.828 Current Power State: Power State #0 00:17:47.828 Power State #0: 00:17:47.828 Max Power: 0.00 W 00:17:47.828 Non-Operational State: Operational 00:17:47.828 Entry Latency: Not Reported 00:17:47.828 Exit Latency: Not Reported 00:17:47.828 Relative Read Throughput: 0 00:17:47.828 Relative Read Latency: 0 00:17:47.828 Relative Write Throughput: 0 00:17:47.828 Relative Write Latency: 0 00:17:47.828 Idle Power: Not Reported 00:17:47.828 Active Power: Not Reported 00:17:47.828 Non-Operational Permissive Mode: Not 
Supported 00:17:47.828 00:17:47.828 Health Information 00:17:47.828 ================== 00:17:47.828 Critical Warnings: 00:17:47.828 Available Spare Space: OK 00:17:47.828 Temperature: OK 00:17:47.828 Device Reliability: OK 00:17:47.828 Read Only: No 00:17:47.828 Volatile Memory Backup: OK 00:17:47.828 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.828 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.828 Available Spare: 0% 00:17:47.828 Available Sp[2024-11-06 12:24:19.275636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.828 [2024-11-06 12:24:19.283477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.828 [2024-11-06 12:24:19.283517] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:47.828 [2024-11-06 12:24:19.283530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.828 [2024-11-06 12:24:19.283539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.828 [2024-11-06 12:24:19.283547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.828 [2024-11-06 12:24:19.283555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.828 [2024-11-06 12:24:19.283610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:47.828 [2024-11-06 12:24:19.283624] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:47.828 
[2024-11-06 12:24:19.284610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.828 [2024-11-06 12:24:19.284672] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:47.828 [2024-11-06 12:24:19.284686] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:47.828 [2024-11-06 12:24:19.285613] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:47.828 [2024-11-06 12:24:19.285631] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:47.828 [2024-11-06 12:24:19.285688] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:47.829 [2024-11-06 12:24:19.287152] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.829 are Threshold: 0% 00:17:47.829 Life Percentage Used: 0% 00:17:47.829 Data Units Read: 0 00:17:47.829 Data Units Written: 0 00:17:47.829 Host Read Commands: 0 00:17:47.829 Host Write Commands: 0 00:17:47.829 Controller Busy Time: 0 minutes 00:17:47.829 Power Cycles: 0 00:17:47.829 Power On Hours: 0 hours 00:17:47.829 Unsafe Shutdowns: 0 00:17:47.829 Unrecoverable Media Errors: 0 00:17:47.829 Lifetime Error Log Entries: 0 00:17:47.829 Warning Temperature Time: 0 minutes 00:17:47.829 Critical Temperature Time: 0 minutes 00:17:47.829 00:17:47.829 Number of Queues 00:17:47.829 ================ 00:17:47.829 Number of I/O Submission Queues: 127 00:17:47.829 Number of I/O Completion Queues: 127 00:17:47.829 00:17:47.829 Active Namespaces 00:17:47.829 ================= 00:17:47.829 Namespace ID:1 00:17:47.829 Error Recovery Timeout: Unlimited 
00:17:47.829 Command Set Identifier: NVM (00h) 00:17:47.829 Deallocate: Supported 00:17:47.829 Deallocated/Unwritten Error: Not Supported 00:17:47.829 Deallocated Read Value: Unknown 00:17:47.829 Deallocate in Write Zeroes: Not Supported 00:17:47.829 Deallocated Guard Field: 0xFFFF 00:17:47.829 Flush: Supported 00:17:47.829 Reservation: Supported 00:17:47.829 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.829 Size (in LBAs): 131072 (0GiB) 00:17:47.829 Capacity (in LBAs): 131072 (0GiB) 00:17:47.829 Utilization (in LBAs): 131072 (0GiB) 00:17:47.829 NGUID: 579701095CD34189ADA1D580A2F9AF8A 00:17:47.829 UUID: 57970109-5cd3-4189-ada1-d580a2f9af8a 00:17:47.829 Thin Provisioning: Not Supported 00:17:47.829 Per-NS Atomic Units: Yes 00:17:47.829 Atomic Boundary Size (Normal): 0 00:17:47.829 Atomic Boundary Size (PFail): 0 00:17:47.829 Atomic Boundary Offset: 0 00:17:47.829 Maximum Single Source Range Length: 65535 00:17:47.829 Maximum Copy Length: 65535 00:17:47.829 Maximum Source Range Count: 1 00:17:47.829 NGUID/EUI64 Never Reused: No 00:17:47.829 Namespace Write Protected: No 00:17:47.829 Number of LBA Formats: 1 00:17:47.829 Current LBA Format: LBA Format #00 00:17:47.829 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.829 00:17:47.829 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:48.086 [2024-11-06 12:24:19.530295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:53.344 Initializing NVMe Controllers 00:17:53.344 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:53.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:53.344 Initialization complete. Launching workers. 00:17:53.344 ======================================================== 00:17:53.344 Latency(us) 00:17:53.344 Device Information : IOPS MiB/s Average min max 00:17:53.344 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39967.72 156.12 3202.41 864.32 7512.50 00:17:53.344 ======================================================== 00:17:53.344 Total : 39967.72 156.12 3202.41 864.32 7512.50 00:17:53.344 00:17:53.345 [2024-11-06 12:24:24.632714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:53.345 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:53.345 [2024-11-06 12:24:24.859425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:58.606 Initializing NVMe Controllers 00:17:58.606 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:58.606 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:58.606 Initialization complete. Launching workers. 
00:17:58.606 ======================================================== 00:17:58.606 Latency(us) 00:17:58.606 Device Information : IOPS MiB/s Average min max 00:17:58.606 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25539.80 99.76 5015.64 1279.24 10698.80 00:17:58.606 ======================================================== 00:17:58.606 Total : 25539.80 99.76 5015.64 1279.24 10698.80 00:17:58.606 00:17:58.606 [2024-11-06 12:24:29.883698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:58.606 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:58.606 [2024-11-06 12:24:30.093953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.867 [2024-11-06 12:24:35.232555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.867 Initializing NVMe Controllers 00:18:03.867 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.867 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:03.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:03.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:03.867 Initialization complete. Launching workers. 
00:18:03.867 Starting thread on core 2 00:18:03.867 Starting thread on core 3 00:18:03.867 Starting thread on core 1 00:18:03.867 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:04.125 [2024-11-06 12:24:35.571920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:07.407 [2024-11-06 12:24:38.917682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:07.407 Initializing NVMe Controllers 00:18:07.407 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.407 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.407 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:07.407 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:07.407 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:07.407 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:07.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:07.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:07.407 Initialization complete. Launching workers. 
00:18:07.407 Starting thread on core 1 with urgent priority queue 00:18:07.407 Starting thread on core 2 with urgent priority queue 00:18:07.407 Starting thread on core 3 with urgent priority queue 00:18:07.407 Starting thread on core 0 with urgent priority queue 00:18:07.407 SPDK bdev Controller (SPDK2 ) core 0: 6788.33 IO/s 14.73 secs/100000 ios 00:18:07.407 SPDK bdev Controller (SPDK2 ) core 1: 11475.67 IO/s 8.71 secs/100000 ios 00:18:07.407 SPDK bdev Controller (SPDK2 ) core 2: 6068.33 IO/s 16.48 secs/100000 ios 00:18:07.407 SPDK bdev Controller (SPDK2 ) core 3: 7163.33 IO/s 13.96 secs/100000 ios 00:18:07.407 ======================================================== 00:18:07.407 00:18:07.407 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:07.665 [2024-11-06 12:24:39.267957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:07.665 Initializing NVMe Controllers 00:18:07.665 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.665 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:07.665 Namespace ID: 1 size: 0GB 00:18:07.665 Initialization complete. 00:18:07.665 INFO: using host memory buffer for IO 00:18:07.665 Hello world! 
00:18:07.665 [2024-11-06 12:24:39.280031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:07.923 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:08.180 [2024-11-06 12:24:39.628169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:09.115 Initializing NVMe Controllers 00:18:09.115 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:09.115 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:09.115 Initialization complete. Launching workers. 00:18:09.115 submit (in ns) avg, min, max = 8539.8, 4524.5, 4004428.2 00:18:09.115 complete (in ns) avg, min, max = 22840.1, 2720.9, 4194665.5 00:18:09.115 00:18:09.115 Submit histogram 00:18:09.115 ================ 00:18:09.115 Range in us Cumulative Count 00:18:09.115 4.509 - 4.538: 0.0117% ( 2) 00:18:09.115 4.538 - 4.567: 0.0583% ( 8) 00:18:09.116 4.567 - 4.596: 0.8800% ( 141) 00:18:09.116 4.596 - 4.625: 4.0210% ( 539) 00:18:09.116 4.625 - 4.655: 7.6340% ( 620) 00:18:09.116 4.655 - 4.684: 11.3520% ( 638) 00:18:09.116 4.684 - 4.713: 24.9009% ( 2325) 00:18:09.116 4.713 - 4.742: 39.2249% ( 2458) 00:18:09.116 4.742 - 4.771: 49.5921% ( 1779) 00:18:09.116 4.771 - 4.800: 61.3462% ( 2017) 00:18:09.116 4.800 - 4.829: 71.8007% ( 1794) 00:18:09.116 4.829 - 4.858: 81.0956% ( 1595) 00:18:09.116 4.858 - 4.887: 85.6294% ( 778) 00:18:09.116 4.887 - 4.916: 87.3834% ( 301) 00:18:09.116 4.916 - 4.945: 88.3275% ( 162) 00:18:09.116 4.945 - 4.975: 89.5338% ( 207) 00:18:09.116 4.975 - 5.004: 91.4744% ( 333) 00:18:09.116 5.004 - 5.033: 93.0828% ( 276) 00:18:09.116 5.033 - 5.062: 94.8368% ( 301) 00:18:09.116 5.062 - 5.091: 96.4918% ( 284) 00:18:09.116 5.091 - 5.120: 97.5816% ( 
187) 00:18:09.116 5.120 - 5.149: 98.3217% ( 127) 00:18:09.116 5.149 - 5.178: 98.8170% ( 85) 00:18:09.116 5.178 - 5.207: 99.1841% ( 63) 00:18:09.116 5.207 - 5.236: 99.2657% ( 14) 00:18:09.116 5.236 - 5.265: 99.3124% ( 8) 00:18:09.116 5.265 - 5.295: 99.3415% ( 5) 00:18:09.116 5.295 - 5.324: 99.3531% ( 2) 00:18:09.116 5.353 - 5.382: 99.3590% ( 1) 00:18:09.116 5.411 - 5.440: 99.3648% ( 1) 00:18:09.116 5.469 - 5.498: 99.3765% ( 2) 00:18:09.116 5.498 - 5.527: 99.3823% ( 1) 00:18:09.116 5.527 - 5.556: 99.3881% ( 1) 00:18:09.116 5.556 - 5.585: 99.3939% ( 1) 00:18:09.116 5.673 - 5.702: 99.3998% ( 1) 00:18:09.116 5.731 - 5.760: 99.4056% ( 1) 00:18:09.116 5.847 - 5.876: 99.4172% ( 2) 00:18:09.116 5.876 - 5.905: 99.4289% ( 2) 00:18:09.116 5.935 - 5.964: 99.4347% ( 1) 00:18:09.116 5.993 - 6.022: 99.4406% ( 1) 00:18:09.116 6.022 - 6.051: 99.4522% ( 2) 00:18:09.116 6.051 - 6.080: 99.4639% ( 2) 00:18:09.116 6.080 - 6.109: 99.4697% ( 1) 00:18:09.116 6.138 - 6.167: 99.4814% ( 2) 00:18:09.116 6.255 - 6.284: 99.4872% ( 1) 00:18:09.116 6.313 - 6.342: 99.4930% ( 1) 00:18:09.116 6.545 - 6.575: 99.4988% ( 1) 00:18:09.116 7.331 - 7.360: 99.5047% ( 1) 00:18:09.116 7.389 - 7.418: 99.5105% ( 1) 00:18:09.116 7.796 - 7.855: 99.5163% ( 1) 00:18:09.116 7.855 - 7.913: 99.5221% ( 1) 00:18:09.116 7.971 - 8.029: 99.5455% ( 4) 00:18:09.116 8.029 - 8.087: 99.5513% ( 1) 00:18:09.116 8.087 - 8.145: 99.5688% ( 3) 00:18:09.116 8.204 - 8.262: 99.5746% ( 1) 00:18:09.116 8.320 - 8.378: 99.5862% ( 2) 00:18:09.116 8.378 - 8.436: 99.5921% ( 1) 00:18:09.116 8.436 - 8.495: 99.6096% ( 3) 00:18:09.116 8.669 - 8.727: 99.6154% ( 1) 00:18:09.116 8.727 - 8.785: 99.6212% ( 1) 00:18:09.116 8.844 - 8.902: 99.6270% ( 1) 00:18:09.116 8.902 - 8.960: 99.6387% ( 2) 00:18:09.116 8.960 - 9.018: 99.6562% ( 3) 00:18:09.116 9.018 - 9.076: 99.6620% ( 1) 00:18:09.116 9.076 - 9.135: 99.6737% ( 2) 00:18:09.116 9.251 - 9.309: 99.6795% ( 1) 00:18:09.116 9.309 - 9.367: 99.6911% ( 2) 00:18:09.116 9.367 - 9.425: 99.7086% ( 3) 00:18:09.116 
9.484 - 9.542: 99.7145% ( 1) 00:18:09.116 9.716 - 9.775: 99.7319% ( 3) 00:18:09.116 9.775 - 9.833: 99.7378% ( 1) 00:18:09.116 9.833 - 9.891: 99.7552% ( 3) 00:18:09.116 9.949 - 10.007: 99.7611% ( 1) 00:18:09.116 10.007 - 10.065: 99.7669% ( 1) 00:18:09.116 10.065 - 10.124: 99.7727% ( 1) 00:18:09.116 10.124 - 10.182: 99.7844% ( 2) 00:18:09.116 10.182 - 10.240: 99.7960% ( 2) 00:18:09.116 10.240 - 10.298: 99.8019% ( 1) 00:18:09.116 10.356 - 10.415: 99.8077% ( 1) 00:18:09.116 10.705 - 10.764: 99.8135% ( 1) 00:18:09.116 10.764 - 10.822: 99.8193% ( 1) 00:18:09.116 10.938 - 10.996: 99.8252% ( 1) 00:18:09.116 11.229 - 11.287: 99.8310% ( 1) 00:18:09.116 11.578 - 11.636: 99.8368% ( 1) 00:18:09.116 11.636 - 11.695: 99.8485% ( 2) 00:18:09.116 11.927 - 11.985: 99.8543% ( 1) 00:18:09.116 11.985 - 12.044: 99.8601% ( 1) 00:18:09.116 12.276 - 12.335: 99.8660% ( 1) 00:18:09.116 13.091 - 13.149: 99.8718% ( 1) 00:18:09.116 14.313 - 14.371: 99.8776% ( 1) 00:18:09.116 14.895 - 15.011: 99.8834% ( 1) 00:18:09.116 15.825 - 15.942: 99.8893% ( 1) 00:18:09.116 16.989 - 17.105: 99.8951% ( 1) 00:18:09.116 19.898 - 20.015: 99.9009% ( 1) 00:18:09.116 20.829 - 20.945: 99.9068% ( 1) 00:18:09.116 3991.738 - 4021.527: 100.0000% ( 16) 00:18:09.116 00:18:09.116 Complete histogram 00:18:09.116 ================== 00:18:09.116 Range in us Cumulative Count 00:18:09.116 2.720 - 2.735: 0.0874% ( 15) 00:18:09.116 2.735 - 2.749: 0.5070% ( 72) 00:18:09.116 2.749 - 2.764: 1.3054% ( 137) 00:18:09.116 2.764 - 2.778: 9.7261% ( 1445) 00:18:09.116 2.778 - 2.793: 48.4324% ( 6642) 00:18:09.116 2.793 - 2.807: 76.4394% ( 4806) 00:18:09.116 2.807 - 2.822: 82.9079% ( 1110) 00:18:09.116 2.822 - 2.836: 87.2902% ( 752) 00:18:09.116 2.836 - 2.851: 89.8951% ( 447) 00:18:09.116 2.851 - 2.865: 92.4126% ( 432) 00:18:09.116 2.865 - 2.880: 95.9674% ( 610) 00:18:09.116 2.880 - 2.895: 98.0128% ( 351) 00:18:09.116 2.895 - 2.909: 98.6597% ( 111) 00:18:09.116 2.909 - 2.924: 98.8054% ( 25) 00:18:09.116 2.924 - 2.938: 98.8520% ( 8) 
00:18:09.116 2.938 - 2.953: 98.8578% ( 1) 00:18:09.116 2.967 - 2.982: 98.8869% ( 5) 00:18:09.116 2.982 - 2.996: 98.9044% ( 3) 00:18:09.116 2.996 - 3.011: 98.9103% ( 1) 00:18:09.116 3.011 - 3.025: 98.9161% ( 1) 00:18:09.116 3.069 - 3.084: 98.9219% ( 1) 00:18:09.116 3.113 - 3.127: 98.9277% ( 1) 00:18:09.116 3.156 - 3.171: 98.9336% ( 1) 00:18:09.116 3.171 - 3.185: 98.9394% ( 1) 00:18:09.116 3.244 - 3.258: 98.9452% ( 1) 00:18:09.116 3.258 - 3.273: 98.9510% ( 1) 00:18:09.116 3.287 - 3.302: 98.9569% ( 1) 00:18:09.116 3.331 - 3.345: 98.9627% ( 1) 00:18:09.116 3.360 - 3.375: 98.9744% ( 2) 00:18:09.116 3.375 - 3.389: 98.9802% ( 1) 00:18:09.116 3.389 - 3.404: 98.9860% ( 1) 00:18:09.116 3.404 - 3.418: 98.9918% ( 1) 00:18:09.116 3.462 - 3.476: 99.0035% ( 2) 00:18:09.116 3.505 - 3.520: 99.0093% ( 1) 00:18:09.116 3.520 - 3.535: 99.0152% ( 1) 00:18:09.116 3.564 - 3.578: 99.0210% ( 1) 00:18:09.116 3.607 - 3.622: 99.0268% ( 1) 00:18:09.116 3.651 - 3.665: 99.0326% ( 1) 00:18:09.116 3.724 - 3.753: 99.0443% ( 2) 00:18:09.116 3.811 - 3.840: 99.0501% ( 1) 00:18:09.116 3.840 - 3.869: 99.0618% ( 2) 00:18:09.116 3.869 - 3.898: 99.0676% ( 1) 00:18:09.116 3.985 - 4.015: 99.0734% ( 1) 00:18:09.116 4.044 - 4.073: 99.0793% ( 1) 00:18:09.116 5.207 - 5.236: 99.0851% ( 1) 00:18:09.116 5.265 - 5.295: 99.0909% ( 1) 00:18:09.116 5.469 - 5.498: 99.0967% ( 1) 00:18:09.116 5.556 - 5.585: 99.1026% ( 1) 00:18:09.116 5.585 - 5.615: 99.1084% ( 1) 00:18:09.116 5.731 - 5.760: 99.1142% ( 1) 00:18:09.116 5.789 - 5.818: 99.1200% ( 1) 00:18:09.116 5.818 - 5.847: 99.1259% ( 1) 00:18:09.116 5.876 - 5.905: 99.1317% ( 1) 00:18:09.116 5.905 - 5.935: 99.1434% ( 2) 00:18:09.116 5.993 - 6.022: 99.1492% ( 1) 00:18:09.116 6.022 - 6.051: 99.1550% ( 1) 00:18:09.116 6.196 - 6.225: 99.1608% ( 1) 00:18:09.116 6.225 - 6.255: 99.1667% ( 1) 00:18:09.116 6.255 - 6.284: 99.1725% ( 1) 00:18:09.116 6.342 - 6.371: 99.1783% ( 1) 00:18:09.116 6.429 - 6.458: 99.1900% ( 2) 00:18:09.116 6.516 - 6.545: 99.1958% ( 1) 00:18:09.116 6.575 - 
6.604: 99.2075% ( 2) 00:18:09.116 6.633 - 6.662: 99.2133% ( 1) 00:18:09.116 6.662 - 6.691: 99.2191% ( 1) 00:18:09.116 6.749 - 6.778: 99.2366% ( 3) 00:18:09.116 6.807 - 6.836: 99.2541% ( 3) 00:18:09.116 6.865 - 6.895: 99.2599% ( 1) 00:18:09.116 6.953 - 6.9[2024-11-06 12:24:40.725170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:09.375 82: 99.2657% ( 1) 00:18:09.375 6.982 - 7.011: 99.2716% ( 1) 00:18:09.375 7.011 - 7.040: 99.2774% ( 1) 00:18:09.375 7.069 - 7.098: 99.2832% ( 1) 00:18:09.375 7.185 - 7.215: 99.2890% ( 1) 00:18:09.375 7.244 - 7.273: 99.2949% ( 1) 00:18:09.375 7.331 - 7.360: 99.3007% ( 1) 00:18:09.375 7.360 - 7.389: 99.3065% ( 1) 00:18:09.375 7.418 - 7.447: 99.3182% ( 2) 00:18:09.375 7.447 - 7.505: 99.3240% ( 1) 00:18:09.375 7.505 - 7.564: 99.3357% ( 2) 00:18:09.375 7.564 - 7.622: 99.3415% ( 1) 00:18:09.375 7.622 - 7.680: 99.3473% ( 1) 00:18:09.375 7.796 - 7.855: 99.3590% ( 2) 00:18:09.375 7.913 - 7.971: 99.3706% ( 2) 00:18:09.375 7.971 - 8.029: 99.3765% ( 1) 00:18:09.375 8.145 - 8.204: 99.3823% ( 1) 00:18:09.375 8.262 - 8.320: 99.3881% ( 1) 00:18:09.375 8.378 - 8.436: 99.3939% ( 1) 00:18:09.375 8.611 - 8.669: 99.3998% ( 1) 00:18:09.375 8.785 - 8.844: 99.4114% ( 2) 00:18:09.375 8.844 - 8.902: 99.4172% ( 1) 00:18:09.375 9.076 - 9.135: 99.4231% ( 1) 00:18:09.375 9.251 - 9.309: 99.4289% ( 1) 00:18:09.375 9.425 - 9.484: 99.4347% ( 1) 00:18:09.375 9.600 - 9.658: 99.4406% ( 1) 00:18:09.376 10.240 - 10.298: 99.4464% ( 1) 00:18:09.376 10.298 - 10.356: 99.4522% ( 1) 00:18:09.376 10.938 - 10.996: 99.4580% ( 1) 00:18:09.376 11.869 - 11.927: 99.4639% ( 1) 00:18:09.376 11.927 - 11.985: 99.4697% ( 1) 00:18:09.376 12.335 - 12.393: 99.4755% ( 1) 00:18:09.376 13.673 - 13.731: 99.4814% ( 1) 00:18:09.376 16.873 - 16.989: 99.4872% ( 1) 00:18:09.376 17.571 - 17.687: 99.4930% ( 1) 00:18:09.376 19.782 - 19.898: 99.4988% ( 1) 00:18:09.376 3500.218 - 3515.113: 99.5047% ( 1) 00:18:09.376 3991.738 - 4021.527: 99.9825% 
( 82) 00:18:09.376 4021.527 - 4051.316: 99.9942% ( 2) 00:18:09.376 4170.473 - 4200.262: 100.0000% ( 1) 00:18:09.376 00:18:09.376 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:09.376 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:09.376 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:09.376 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:09.376 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.635 [ 00:18:09.635 { 00:18:09.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.635 "subtype": "Discovery", 00:18:09.635 "listen_addresses": [], 00:18:09.635 "allow_any_host": true, 00:18:09.635 "hosts": [] 00:18:09.635 }, 00:18:09.635 { 00:18:09.635 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.635 "subtype": "NVMe", 00:18:09.635 "listen_addresses": [ 00:18:09.635 { 00:18:09.635 "trtype": "VFIOUSER", 00:18:09.635 "adrfam": "IPv4", 00:18:09.635 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.635 "trsvcid": "0" 00:18:09.635 } 00:18:09.635 ], 00:18:09.635 "allow_any_host": true, 00:18:09.635 "hosts": [], 00:18:09.635 "serial_number": "SPDK1", 00:18:09.635 "model_number": "SPDK bdev Controller", 00:18:09.635 "max_namespaces": 32, 00:18:09.635 "min_cntlid": 1, 00:18:09.635 "max_cntlid": 65519, 00:18:09.635 "namespaces": [ 00:18:09.635 { 00:18:09.635 "nsid": 1, 00:18:09.635 "bdev_name": "Malloc1", 00:18:09.635 "name": "Malloc1", 00:18:09.635 "nguid": "21CC95A59E9143148AE1BBD55925A472", 00:18:09.635 "uuid": "21cc95a5-9e91-4314-8ae1-bbd55925a472" 
00:18:09.635 }, 00:18:09.635 { 00:18:09.635 "nsid": 2, 00:18:09.635 "bdev_name": "Malloc3", 00:18:09.635 "name": "Malloc3", 00:18:09.635 "nguid": "58B9C11FFDBC4E13B315D2C227A1230E", 00:18:09.635 "uuid": "58b9c11f-fdbc-4e13-b315-d2c227a1230e" 00:18:09.635 } 00:18:09.635 ] 00:18:09.635 }, 00:18:09.635 { 00:18:09.635 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.635 "subtype": "NVMe", 00:18:09.635 "listen_addresses": [ 00:18:09.635 { 00:18:09.635 "trtype": "VFIOUSER", 00:18:09.635 "adrfam": "IPv4", 00:18:09.635 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.635 "trsvcid": "0" 00:18:09.635 } 00:18:09.635 ], 00:18:09.635 "allow_any_host": true, 00:18:09.635 "hosts": [], 00:18:09.635 "serial_number": "SPDK2", 00:18:09.635 "model_number": "SPDK bdev Controller", 00:18:09.635 "max_namespaces": 32, 00:18:09.635 "min_cntlid": 1, 00:18:09.635 "max_cntlid": 65519, 00:18:09.635 "namespaces": [ 00:18:09.635 { 00:18:09.635 "nsid": 1, 00:18:09.635 "bdev_name": "Malloc2", 00:18:09.635 "name": "Malloc2", 00:18:09.635 "nguid": "579701095CD34189ADA1D580A2F9AF8A", 00:18:09.635 "uuid": "57970109-5cd3-4189-ada1-d580a2f9af8a" 00:18:09.635 } 00:18:09.635 ] 00:18:09.635 } 00:18:09.635 ] 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=146388 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 
00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:09.635 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:09.894 [2024-11-06 12:24:41.270154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:09.894 Malloc4 00:18:09.894 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:10.153 [2024-11-06 12:24:41.625657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:10.153 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:10.153 Asynchronous Event Request test 00:18:10.153 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:10.153 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:10.153 Registering asynchronous event callbacks... 00:18:10.153 Starting namespace attribute notice tests for all controllers... 00:18:10.153 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:10.153 aer_cb - Changed Namespace 00:18:10.153 Cleaning up... 
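The AER test above synchronizes through a touch file: the `aer` tool is started with `-t /tmp/aer_touch_file`, and the script's `waitforfile` helper polls until that file appears before proceeding (and removes it afterwards). A minimal sketch of that polling handshake, as a standalone Python illustration (the function name `wait_for_file` and the timeout values are illustrative, not SPDK's):

```python
# Sketch of the touch-file handshake used between the test script and the
# aer tool. wait_for_file mimics the waitforfile loop in autotest_common.sh:
# poll for the file's existence until a deadline.
import os
import tempfile
import threading
import time

def wait_for_file(path, timeout=5.0, interval=0.1):
    """Return True once `path` exists, False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

if __name__ == "__main__":
    touch_file = os.path.join(tempfile.mkdtemp(), "aer_touch_file")
    # Stand-in for the aer tool: it creates the file once its AER callbacks
    # are registered and it is ready for namespace-change events.
    threading.Timer(0.3, lambda: open(touch_file, "w").close()).start()
    assert wait_for_file(touch_file)  # script proceeds only after readiness
    os.remove(touch_file)             # matches `rm -f /tmp/aer_touch_file`
```

The same pattern appears throughout the autotest scripts: a long-running tool signals readiness by touching a file rather than by exit status, so the script can run follow-up RPCs (here, `bdev_malloc_create` for Malloc4) while the tool is still waiting for events.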
00:18:10.412 [ 00:18:10.412 { 00:18:10.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:10.412 "subtype": "Discovery", 00:18:10.412 "listen_addresses": [], 00:18:10.412 "allow_any_host": true, 00:18:10.412 "hosts": [] 00:18:10.412 }, 00:18:10.412 { 00:18:10.412 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:10.412 "subtype": "NVMe", 00:18:10.412 "listen_addresses": [ 00:18:10.412 { 00:18:10.412 "trtype": "VFIOUSER", 00:18:10.412 "adrfam": "IPv4", 00:18:10.412 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:10.412 "trsvcid": "0" 00:18:10.412 } 00:18:10.412 ], 00:18:10.412 "allow_any_host": true, 00:18:10.412 "hosts": [], 00:18:10.412 "serial_number": "SPDK1", 00:18:10.412 "model_number": "SPDK bdev Controller", 00:18:10.412 "max_namespaces": 32, 00:18:10.412 "min_cntlid": 1, 00:18:10.412 "max_cntlid": 65519, 00:18:10.412 "namespaces": [ 00:18:10.412 { 00:18:10.412 "nsid": 1, 00:18:10.412 "bdev_name": "Malloc1", 00:18:10.412 "name": "Malloc1", 00:18:10.412 "nguid": "21CC95A59E9143148AE1BBD55925A472", 00:18:10.412 "uuid": "21cc95a5-9e91-4314-8ae1-bbd55925a472" 00:18:10.412 }, 00:18:10.412 { 00:18:10.412 "nsid": 2, 00:18:10.412 "bdev_name": "Malloc3", 00:18:10.412 "name": "Malloc3", 00:18:10.412 "nguid": "58B9C11FFDBC4E13B315D2C227A1230E", 00:18:10.412 "uuid": "58b9c11f-fdbc-4e13-b315-d2c227a1230e" 00:18:10.412 } 00:18:10.412 ] 00:18:10.412 }, 00:18:10.412 { 00:18:10.412 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:10.412 "subtype": "NVMe", 00:18:10.412 "listen_addresses": [ 00:18:10.412 { 00:18:10.412 "trtype": "VFIOUSER", 00:18:10.412 "adrfam": "IPv4", 00:18:10.412 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:10.412 "trsvcid": "0" 00:18:10.412 } 00:18:10.412 ], 00:18:10.412 "allow_any_host": true, 00:18:10.412 "hosts": [], 00:18:10.412 "serial_number": "SPDK2", 00:18:10.412 "model_number": "SPDK bdev Controller", 00:18:10.412 "max_namespaces": 32, 00:18:10.412 "min_cntlid": 1, 00:18:10.412 "max_cntlid": 65519, 00:18:10.412 "namespaces": [ 
00:18:10.412 { 00:18:10.412 "nsid": 1, 00:18:10.412 "bdev_name": "Malloc2", 00:18:10.412 "name": "Malloc2", 00:18:10.412 "nguid": "579701095CD34189ADA1D580A2F9AF8A", 00:18:10.412 "uuid": "57970109-5cd3-4189-ada1-d580a2f9af8a" 00:18:10.412 }, 00:18:10.412 { 00:18:10.412 "nsid": 2, 00:18:10.412 "bdev_name": "Malloc4", 00:18:10.412 "name": "Malloc4", 00:18:10.412 "nguid": "6CA87E99A77C4AAF9B78F895E606F9B6", 00:18:10.412 "uuid": "6ca87e99-a77c-4aaf-9b78-f895e606f9b6" 00:18:10.412 } 00:18:10.412 ] 00:18:10.412 } 00:18:10.412 ] 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 146388 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 137374 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 137374 ']' 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 137374 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 137374 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 137374' 00:18:10.412 killing process with pid 137374 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@971 -- # kill 137374 00:18:10.412 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 137374 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=146588 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 146588' 00:18:10.671 Process pid: 146588 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 146588 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 146588 ']' 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.671 12:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.671 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:10.671 [2024-11-06 12:24:42.281841] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:10.671 [2024-11-06 12:24:42.283112] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:18:10.671 [2024-11-06 12:24:42.283164] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.930 [2024-11-06 12:24:42.376563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.930 [2024-11-06 12:24:42.426586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.930 [2024-11-06 12:24:42.426631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.930 [2024-11-06 12:24:42.426642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.930 [2024-11-06 12:24:42.426650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.930 [2024-11-06 12:24:42.426658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.930 [2024-11-06 12:24:42.428610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.930 [2024-11-06 12:24:42.428714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.930 [2024-11-06 12:24:42.428843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.930 [2024-11-06 12:24:42.428845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.930 [2024-11-06 12:24:42.503805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:10.930 [2024-11-06 12:24:42.504052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:10.930 [2024-11-06 12:24:42.504217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:10.930 [2024-11-06 12:24:42.504636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:10.930 [2024-11-06 12:24:42.504904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:11.866 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:11.866 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:18:11.866 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:12.802 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:13.062 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:13.062 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:13.062 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:13.062 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:13.062 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:13.321 Malloc1 00:18:13.321 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:13.579 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:13.839 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:14.098 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:14.098 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:14.098 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:14.356 Malloc2 00:18:14.356 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:14.616 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:14.875 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 146588 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 146588 ']' 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 146588 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:18:15.133 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.133 12:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 146588 00:18:15.134 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.134 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.134 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 146588' 00:18:15.134 killing process with pid 146588 00:18:15.134 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 146588 00:18:15.134 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 146588 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:15.392 00:18:15.392 real 0m54.239s 00:18:15.392 user 3m27.881s 00:18:15.392 sys 0m3.663s 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:15.392 ************************************ 00:18:15.392 END TEST nvmf_vfio_user 00:18:15.392 ************************************ 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:15.392 12:24:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.652 ************************************ 00:18:15.652 START TEST nvmf_vfio_user_nvme_compliance 00:18:15.652 ************************************ 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:15.652 * Looking for test storage... 00:18:15.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.652 12:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.652 12:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:15.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.652 --rc genhtml_branch_coverage=1 00:18:15.652 --rc genhtml_function_coverage=1 00:18:15.652 --rc genhtml_legend=1 00:18:15.652 --rc geninfo_all_blocks=1 00:18:15.652 --rc geninfo_unexecuted_blocks=1 00:18:15.652 00:18:15.652 ' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:15.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.652 --rc genhtml_branch_coverage=1 00:18:15.652 --rc genhtml_function_coverage=1 00:18:15.652 --rc genhtml_legend=1 00:18:15.652 --rc geninfo_all_blocks=1 00:18:15.652 --rc geninfo_unexecuted_blocks=1 00:18:15.652 00:18:15.652 ' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:15.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.652 --rc genhtml_branch_coverage=1 00:18:15.652 --rc genhtml_function_coverage=1 00:18:15.652 --rc 
genhtml_legend=1 00:18:15.652 --rc geninfo_all_blocks=1 00:18:15.652 --rc geninfo_unexecuted_blocks=1 00:18:15.652 00:18:15.652 ' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:15.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.652 --rc genhtml_branch_coverage=1 00:18:15.652 --rc genhtml_function_coverage=1 00:18:15.652 --rc genhtml_legend=1 00:18:15.652 --rc geninfo_all_blocks=1 00:18:15.652 --rc geninfo_unexecuted_blocks=1 00:18:15.652 00:18:15.652 ' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.652 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.653 12:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.653 12:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=147708 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 147708' 00:18:15.653 Process pid: 147708 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 147708 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 147708 ']' 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.653 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.912 [2024-11-06 12:24:47.309277] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:18:15.912 [2024-11-06 12:24:47.309342] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.912 [2024-11-06 12:24:47.402670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.912 [2024-11-06 12:24:47.452163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.912 [2024-11-06 12:24:47.452203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.912 [2024-11-06 12:24:47.452213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.912 [2024-11-06 12:24:47.452222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.912 [2024-11-06 12:24:47.452230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.912 [2024-11-06 12:24:47.453990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.912 [2024-11-06 12:24:47.454018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.912 [2024-11-06 12:24:47.454022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.170 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:16.171 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:18:16.171 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.107 12:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.107 malloc0 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:17.107 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:17.366 00:18:17.366 00:18:17.366 CUnit - A unit testing framework for C - Version 2.1-3 00:18:17.366 http://cunit.sourceforge.net/ 00:18:17.366 00:18:17.366 00:18:17.366 Suite: nvme_compliance 00:18:17.366 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 12:24:48.840006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.366 [2024-11-06 12:24:48.841399] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:17.366 [2024-11-06 12:24:48.841414] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:17.366 [2024-11-06 12:24:48.841420] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:17.366 [2024-11-06 12:24:48.843026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.366 passed 00:18:17.366 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 12:24:48.943742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.366 [2024-11-06 12:24:48.946757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.623 passed 00:18:17.623 Test: admin_identify_ns ...[2024-11-06 12:24:49.048701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.623 [2024-11-06 12:24:49.108470] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:17.623 [2024-11-06 12:24:49.116474] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:17.623 [2024-11-06 12:24:49.137598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:17.623 passed 00:18:17.623 Test: admin_get_features_mandatory_features ...[2024-11-06 12:24:49.235491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.623 [2024-11-06 12:24:49.238506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.881 passed 00:18:17.881 Test: admin_get_features_optional_features ...[2024-11-06 12:24:49.340161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.881 [2024-11-06 12:24:49.343180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.881 passed 00:18:17.881 Test: admin_set_features_number_of_queues ...[2024-11-06 12:24:49.440003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.140 [2024-11-06 12:24:49.545577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.140 passed 00:18:18.140 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 12:24:49.642209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.140 [2024-11-06 12:24:49.645237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.140 passed 00:18:18.140 Test: admin_get_log_page_with_lpo ...[2024-11-06 12:24:49.744255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.398 [2024-11-06 12:24:49.812477] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:18.398 [2024-11-06 12:24:49.825542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.398 passed 00:18:18.398 Test: fabric_property_get ...[2024-11-06 12:24:49.922214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.398 [2024-11-06 12:24:49.923507] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:18.398 [2024-11-06 12:24:49.925236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.398 passed 00:18:18.657 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 12:24:50.022887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.657 [2024-11-06 12:24:50.024178] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:18.657 [2024-11-06 12:24:50.025909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.657 passed 00:18:18.657 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 12:24:50.125253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.657 [2024-11-06 12:24:50.211473] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:18.657 [2024-11-06 12:24:50.227474] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:18.657 [2024-11-06 12:24:50.232662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.657 passed 00:18:18.974 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 12:24:50.328354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.974 [2024-11-06 12:24:50.329641] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:18.974 [2024-11-06 12:24:50.331375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.974 passed 00:18:18.974 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 12:24:50.431253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.974 [2024-11-06 12:24:50.506466] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:18.974 [2024-11-06 
12:24:50.530470] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:18.974 [2024-11-06 12:24:50.535582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:18.974 passed 00:18:19.232 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 12:24:50.632231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:19.232 [2024-11-06 12:24:50.633523] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:19.232 [2024-11-06 12:24:50.633549] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:19.232 [2024-11-06 12:24:50.635247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.232 passed 00:18:19.232 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 12:24:50.734074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:19.232 [2024-11-06 12:24:50.825470] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:19.232 [2024-11-06 12:24:50.833468] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:19.232 [2024-11-06 12:24:50.841466] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:19.232 [2024-11-06 12:24:50.849470] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:19.491 [2024-11-06 12:24:50.878575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.491 passed 00:18:19.491 Test: admin_create_io_sq_verify_pc ...[2024-11-06 12:24:50.976219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:19.491 [2024-11-06 12:24:50.994476] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:19.491 [2024-11-06 12:24:51.012405] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.491 passed 00:18:19.749 Test: admin_create_io_qp_max_qps ...[2024-11-06 12:24:51.109055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:20.682 [2024-11-06 12:24:52.204477] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:21.248 [2024-11-06 12:24:52.592228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:21.248 passed 00:18:21.248 Test: admin_create_io_sq_shared_cq ...[2024-11-06 12:24:52.690273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:21.248 [2024-11-06 12:24:52.822467] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:21.248 [2024-11-06 12:24:52.859550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:21.506 passed 00:18:21.506 00:18:21.506 Run Summary: Type Total Ran Passed Failed Inactive 00:18:21.506 suites 1 1 n/a 0 0 00:18:21.506 tests 18 18 18 0 0 00:18:21.506 asserts 360 360 360 0 n/a 00:18:21.506 00:18:21.506 Elapsed time = 1.694 seconds 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 147708 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 147708 ']' 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 147708 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 147708 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 147708' 00:18:21.506 killing process with pid 147708 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 147708 00:18:21.506 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 147708 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:21.765 00:18:21.765 real 0m6.153s 00:18:21.765 user 0m17.163s 00:18:21.765 sys 0m0.568s 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:21.765 ************************************ 00:18:21.765 END TEST nvmf_vfio_user_nvme_compliance 00:18:21.765 ************************************ 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.765 ************************************ 00:18:21.765 START TEST nvmf_vfio_user_fuzz 00:18:21.765 ************************************ 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:21.765 * Looking for test storage... 00:18:21.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:18:21.765 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.024 12:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.024 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.025 --rc genhtml_branch_coverage=1 00:18:22.025 --rc genhtml_function_coverage=1 00:18:22.025 --rc genhtml_legend=1 00:18:22.025 --rc geninfo_all_blocks=1 00:18:22.025 --rc geninfo_unexecuted_blocks=1 00:18:22.025 00:18:22.025 ' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.025 --rc genhtml_branch_coverage=1 00:18:22.025 --rc genhtml_function_coverage=1 00:18:22.025 --rc genhtml_legend=1 00:18:22.025 --rc geninfo_all_blocks=1 00:18:22.025 --rc geninfo_unexecuted_blocks=1 00:18:22.025 00:18:22.025 ' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.025 --rc genhtml_branch_coverage=1 00:18:22.025 --rc genhtml_function_coverage=1 00:18:22.025 --rc genhtml_legend=1 00:18:22.025 --rc geninfo_all_blocks=1 00:18:22.025 --rc geninfo_unexecuted_blocks=1 00:18:22.025 00:18:22.025 ' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:22.025 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:22.025 --rc genhtml_branch_coverage=1 00:18:22.025 --rc genhtml_function_coverage=1 00:18:22.025 --rc genhtml_legend=1 00:18:22.025 --rc geninfo_all_blocks=1 00:18:22.025 --rc geninfo_unexecuted_blocks=1 00:18:22.025 00:18:22.025 ' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.025 12:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=148824 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 148824' 00:18:22.025 Process pid: 148824 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 148824 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 148824 ']' 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.025 12:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.025 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:22.284 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.284 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:18:22.284 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:23.218 malloc0 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.218 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:23.476 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:55.557 Fuzzing completed. Shutting down the fuzz application 00:18:55.557 00:18:55.557 Dumping successful admin opcodes: 00:18:55.557 8, 9, 10, 24, 00:18:55.557 Dumping successful io opcodes: 00:18:55.557 0, 00:18:55.557 NS: 0x20000081ef00 I/O qp, Total commands completed: 904683, total successful commands: 3531, random_seed: 968367872 00:18:55.557 NS: 0x20000081ef00 admin qp, Total commands completed: 113189, total successful commands: 927, random_seed: 1512315456 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 148824 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 148824 ']' 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 148824 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 148824 00:18:55.557 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 148824' 00:18:55.557 killing process with pid 148824 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 148824 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 148824 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:55.557 00:18:55.557 real 0m33.263s 00:18:55.557 user 0m38.698s 00:18:55.557 sys 0m25.182s 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:55.557 ************************************ 00:18:55.557 END TEST nvmf_vfio_user_fuzz 00:18:55.557 ************************************ 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.557 ************************************ 00:18:55.557 START TEST nvmf_auth_target 00:18:55.557 ************************************ 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:55.557 * Looking for test storage... 00:18:55.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.557 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.557 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:55.557 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.558 --rc genhtml_branch_coverage=1 00:18:55.558 --rc genhtml_function_coverage=1 00:18:55.558 --rc genhtml_legend=1 00:18:55.558 --rc geninfo_all_blocks=1 00:18:55.558 --rc geninfo_unexecuted_blocks=1 00:18:55.558 00:18:55.558 ' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.558 --rc genhtml_branch_coverage=1 00:18:55.558 --rc genhtml_function_coverage=1 00:18:55.558 --rc genhtml_legend=1 00:18:55.558 --rc geninfo_all_blocks=1 00:18:55.558 --rc geninfo_unexecuted_blocks=1 00:18:55.558 00:18:55.558 ' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.558 --rc genhtml_branch_coverage=1 00:18:55.558 --rc genhtml_function_coverage=1 00:18:55.558 --rc genhtml_legend=1 00:18:55.558 --rc geninfo_all_blocks=1 00:18:55.558 --rc geninfo_unexecuted_blocks=1 00:18:55.558 00:18:55.558 ' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:55.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.558 --rc genhtml_branch_coverage=1 00:18:55.558 --rc genhtml_function_coverage=1 00:18:55.558 --rc genhtml_legend=1 00:18:55.558 
--rc geninfo_all_blocks=1 00:18:55.558 --rc geninfo_unexecuted_blocks=1 00:18:55.558 00:18:55.558 ' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.558 
12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:55.558 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.558 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.558 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.827 12:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.827 12:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:00.827 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:00.827 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.827 
12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:00.827 Found net devices under 0000:af:00.0: cvl_0_0 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.827 
12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:00.827 Found net devices under 0000:af:00.1: cvl_0_1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.827 12:25:31 
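The probe loop in the trace buckets each PCI function by its vendor:device pair (Intel 0x1592/0x159b into e810, Intel 0x37d2 into x722, the Mellanox IDs into mlx) before globbing `/sys/bus/pci/devices/$pci/net/` to find the netdevs. A minimal standalone sketch of that bucketing, using the IDs visible above (the function name and the literal 0x15b3 Mellanox vendor ID are illustrative assumptions, not taken from nvmf/common.sh):

```shell
# Sketch: classify a NIC the way the e810/x722/mlx bookkeeping above does.
# classify_nic and the 0x15b3 vendor ID are illustrative assumptions.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2) echo x722 ;;
        0x15b3:0xa2dc | 0x15b3:0x1021 | 0x15b3:0xa2d6 | 0x15b3:0x101d | \
        0x15b3:0x101b | 0x15b3:0x1017 | 0x15b3:0x1019 | 0x15b3:0x1015 | \
        0x15b3:0x1013) echo mlx ;;
        *) echo unknown ;;
    esac
}

# Both ports found in the trace are 0x8086:0x159b, i.e. e810-family (ice driver):
classify_nic 0x8086 0x159b
```

The trace then maps each function to its interface by listing `/sys/bus/pci/devices/0000:af:00.x/net/`, which yields cvl_0_0 and cvl_0_1.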
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:00.827 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.827 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:00.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:19:00.828 00:19:00.828 --- 10.0.0.2 ping statistics --- 00:19:00.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.828 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:19:00.828 00:19:00.828 --- 10.0.0.1 ping statistics --- 00:19:00.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.828 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
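nvmf_tcp_init wires the two ports into a point-to-point topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and one ping in each direction proves the link before the function returns 0. A condensed sketch of that sequence; it defaults to dry-run (printing the commands) because the real steps need root, and the interface/namespace names are taken from the trace:

```shell
# Dry-run sketch of the netns plumbing nvmf_tcp_init performs above.
# DRY_RUN defaults to 1; set DRY_RUN=0 and run as root to actually apply it.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                             # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1         # target -> initiator
```

The namespace is what lets a single host act as both NVMe-oF target and initiator over a real NIC-to-NIC TCP path.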
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=157742 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 157742 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 157742 ']' 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.828 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=157791 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=96288a05f6fdc19add3472c67a35533383a17c04c0f80552 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Oxc 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 96288a05f6fdc19add3472c67a35533383a17c04c0f80552 0 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 96288a05f6fdc19add3472c67a35533383a17c04c0f80552 0 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=96288a05f6fdc19add3472c67a35533383a17c04c0f80552 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Oxc 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Oxc 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Oxc 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e0365938929c7a736ffc2ad4cb84183d47bb316b347e640c039e5cabf91b2a5 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kIv 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e0365938929c7a736ffc2ad4cb84183d47bb316b347e640c039e5cabf91b2a5 3 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e0365938929c7a736ffc2ad4cb84183d47bb316b347e640c039e5cabf91b2a5 3 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e0365938929c7a736ffc2ad4cb84183d47bb316b347e640c039e5cabf91b2a5 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kIv 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kIv 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.kIv 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.087 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3e403d14d63703245c29e69775c8075 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VhD 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3e403d14d63703245c29e69775c8075 1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
f3e403d14d63703245c29e69775c8075 1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3e403d14d63703245c29e69775c8075 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VhD 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VhD 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.VhD 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a4bc9a1fd574ba62268d42ad5ba05c616b47bb5aed5ccb8 00:19:01.088 12:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pY1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a4bc9a1fd574ba62268d42ad5ba05c616b47bb5aed5ccb8 2 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a4bc9a1fd574ba62268d42ad5ba05c616b47bb5aed5ccb8 2 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a4bc9a1fd574ba62268d42ad5ba05c616b47bb5aed5ccb8 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.088 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pY1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pY1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.pY1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6dd942408d37c3086ab7a753406e5ece62e79d8da4919bce 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iFo 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6dd942408d37c3086ab7a753406e5ece62e79d8da4919bce 2 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6dd942408d37c3086ab7a753406e5ece62e79d8da4919bce 2 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6dd942408d37c3086ab7a753406e5ece62e79d8da4919bce 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iFo 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iFo 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.iFo 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74d2b146ba3e4426fc1f31ba4dc68c24 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.K2c 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74d2b146ba3e4426fc1f31ba4dc68c24 1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74d2b146ba3e4426fc1f31ba4dc68c24 1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74d2b146ba3e4426fc1f31ba4dc68c24 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.K2c 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.K2c 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.K2c 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=40babf6402b8f407efc0dc174510af7f3d92b6a9e2aaff89068ac80d77476884 00:19:01.347 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ka4 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 40babf6402b8f407efc0dc174510af7f3d92b6a9e2aaff89068ac80d77476884 3 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 40babf6402b8f407efc0dc174510af7f3d92b6a9e2aaff89068ac80d77476884 3 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=40babf6402b8f407efc0dc174510af7f3d92b6a9e2aaff89068ac80d77476884 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ka4 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ka4 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ka4 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 157742 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 157742 ']' 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
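Each gen_dhchap_key call above draws len/2 random bytes with xxd, and format_key's inline python wraps the resulting hex string into the DH-HMAC-CHAP secret representation `DHHC-1:<hash id>:<base64 payload>:`, where the hash id is 00 for a null digest and 01/02/03 for SHA-256/384/512, and the payload is the secret followed by its little-endian CRC-32. A hedged re-implementation; treating the ASCII hex characters themselves as the secret bytes (so a "48-character" key is a 48-byte secret) is my reading of the trace, not code copied from nvmf/common.sh:

```shell
# Sketch of format_dhchap_key: hex string in, "DHHC-1:NN:<base64>:" out.
# Assumption: the ASCII hex characters are the secret bytes (48 chars = 48 bytes).
format_dhchap_key() {
    python3 - "$1" "$2" << 'EOF'
import base64
import sys
import zlib

secret = sys.argv[1].encode("ascii")
crc = zlib.crc32(secret).to_bytes(4, "little")  # trailing CRC-32, little-endian
payload = base64.b64encode(secret + crc).decode()
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), payload))
EOF
}

# The null-digest 48-character key generated for keys[0] above:
format_dhchap_key 96288a05f6fdc19add3472c67a35533383a17c04c0f80552 0
```

The CRC suffix lets a consumer of the key file detect truncation or corruption before attempting authentication.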
00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.348 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 157791 /var/tmp/host.sock 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 157791 ']' 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:01.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
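Both waitforlisten calls above do the same thing: poll until the freshly started pid is still alive and its RPC UNIX-domain socket (/var/tmp/spdk.sock for nvmf_tgt, /var/tmp/host.sock for the host-side spdk_tgt) has appeared, then return 0 so rpc.py can talk to it. A minimal sketch of that loop (the function name and retry parameters are illustrative, not the real autotest_common.sh values):

```shell
# Sketch of the waitforlisten idea: succeed once the pid is alive and its
# RPC socket shows up; fail if the process dies or the retries run out.
# wait_for_rpc_sock and its retry count are illustrative assumptions.
wait_for_rpc_sock() {
    pid=$1 sock=$2 retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2> /dev/null || return 1    # process died before listening
        [ -S "$sock" ] && return 0                 # socket node exists: app is up
        sleep 0.1
        retries=$((retries - 1))
    done
    return 1
}
```

Typical use would be `wait_for_rpc_sock "$nvmfpid" /var/tmp/spdk.sock` before issuing any rpc.py commands against that socket.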
00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.606 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Oxc 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Oxc 00:19:01.865 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Oxc 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.kIv ]] 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIv 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIv 00:19:02.433 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIv 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VhD 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VhD 00:19:02.433 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VhD 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.pY1 ]] 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pY1 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pY1 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pY1 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iFo 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iFo 00:19:03.000 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iFo 00:19:03.259 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.K2c ]] 00:19:03.259 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K2c 00:19:03.259 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.259 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.517 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K2c 00:19:03.517 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K2c 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ka4 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ka4 00:19:03.776 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ka4 00:19:04.035 12:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:04.035 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:04.035 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.035 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.035 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.035 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.294 12:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.294 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.553 00:19:04.553 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.553 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.553 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.812 { 00:19:04.812 "cntlid": 1, 00:19:04.812 "qid": 0, 00:19:04.812 "state": "enabled", 00:19:04.812 "thread": "nvmf_tgt_poll_group_000", 00:19:04.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:04.812 "listen_address": { 00:19:04.812 "trtype": "TCP", 00:19:04.812 "adrfam": "IPv4", 00:19:04.812 "traddr": "10.0.0.2", 00:19:04.812 "trsvcid": "4420" 00:19:04.812 }, 00:19:04.812 "peer_address": { 00:19:04.812 "trtype": "TCP", 00:19:04.812 "adrfam": "IPv4", 00:19:04.812 "traddr": "10.0.0.1", 00:19:04.812 "trsvcid": "47524" 00:19:04.812 }, 00:19:04.812 "auth": { 00:19:04.812 "state": "completed", 00:19:04.812 "digest": "sha256", 00:19:04.812 "dhgroup": "null" 00:19:04.812 } 00:19:04.812 } 00:19:04.812 ]' 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.812 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.070 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:05.070 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.070 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.070 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.071 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.329 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:05.329 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.266 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.526 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.526 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.526 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.526 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.784 00:19:06.785 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.785 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.785 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.043 { 00:19:07.043 "cntlid": 3, 00:19:07.043 "qid": 0, 00:19:07.043 "state": "enabled", 00:19:07.043 "thread": "nvmf_tgt_poll_group_000", 00:19:07.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:07.043 "listen_address": { 00:19:07.043 "trtype": "TCP", 00:19:07.043 "adrfam": "IPv4", 00:19:07.043 
"traddr": "10.0.0.2", 00:19:07.043 "trsvcid": "4420" 00:19:07.043 }, 00:19:07.043 "peer_address": { 00:19:07.043 "trtype": "TCP", 00:19:07.043 "adrfam": "IPv4", 00:19:07.043 "traddr": "10.0.0.1", 00:19:07.043 "trsvcid": "47548" 00:19:07.043 }, 00:19:07.043 "auth": { 00:19:07.043 "state": "completed", 00:19:07.043 "digest": "sha256", 00:19:07.043 "dhgroup": "null" 00:19:07.043 } 00:19:07.043 } 00:19:07.043 ]' 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.043 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.611 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:07.611 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.180 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.498 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.801 00:19:08.802 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.802 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.802 
12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.078 { 00:19:09.078 "cntlid": 5, 00:19:09.078 "qid": 0, 00:19:09.078 "state": "enabled", 00:19:09.078 "thread": "nvmf_tgt_poll_group_000", 00:19:09.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:09.078 "listen_address": { 00:19:09.078 "trtype": "TCP", 00:19:09.078 "adrfam": "IPv4", 00:19:09.078 "traddr": "10.0.0.2", 00:19:09.078 "trsvcid": "4420" 00:19:09.078 }, 00:19:09.078 "peer_address": { 00:19:09.078 "trtype": "TCP", 00:19:09.078 "adrfam": "IPv4", 00:19:09.078 "traddr": "10.0.0.1", 00:19:09.078 "trsvcid": "47584" 00:19:09.078 }, 00:19:09.078 "auth": { 00:19:09.078 "state": "completed", 00:19:09.078 "digest": "sha256", 00:19:09.078 "dhgroup": "null" 00:19:09.078 } 00:19:09.078 } 00:19:09.078 ]' 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.078 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:09.366 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.366 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.366 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.366 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.366 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.639 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:09.639 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.205 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.463 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.721 00:19:10.980 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.980 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.980 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.238 
12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.238 { 00:19:11.238 "cntlid": 7, 00:19:11.238 "qid": 0, 00:19:11.238 "state": "enabled", 00:19:11.238 "thread": "nvmf_tgt_poll_group_000", 00:19:11.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:11.238 "listen_address": { 00:19:11.238 "trtype": "TCP", 00:19:11.238 "adrfam": "IPv4", 00:19:11.238 "traddr": "10.0.0.2", 00:19:11.238 "trsvcid": "4420" 00:19:11.238 }, 00:19:11.238 "peer_address": { 00:19:11.238 "trtype": "TCP", 00:19:11.238 "adrfam": "IPv4", 00:19:11.238 "traddr": "10.0.0.1", 00:19:11.238 "trsvcid": "35736" 00:19:11.238 }, 00:19:11.238 "auth": { 00:19:11.238 "state": "completed", 00:19:11.238 "digest": "sha256", 00:19:11.238 "dhgroup": "null" 00:19:11.238 } 00:19:11.238 } 00:19:11.238 ]' 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.238 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.496 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:11.496 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.431 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.689 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.948 00:19:12.948 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.948 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.948 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.206 { 00:19:13.206 "cntlid": 9, 00:19:13.206 "qid": 0, 00:19:13.206 "state": "enabled", 00:19:13.206 "thread": "nvmf_tgt_poll_group_000", 00:19:13.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:13.206 "listen_address": { 00:19:13.206 "trtype": "TCP", 00:19:13.206 "adrfam": "IPv4", 00:19:13.206 "traddr": "10.0.0.2", 00:19:13.206 "trsvcid": "4420" 00:19:13.206 }, 00:19:13.206 "peer_address": { 00:19:13.206 "trtype": "TCP", 00:19:13.206 "adrfam": "IPv4", 00:19:13.206 "traddr": "10.0.0.1", 00:19:13.206 "trsvcid": "35766" 00:19:13.206 
}, 00:19:13.206 "auth": { 00:19:13.206 "state": "completed", 00:19:13.206 "digest": "sha256", 00:19:13.206 "dhgroup": "ffdhe2048" 00:19:13.206 } 00:19:13.206 } 00:19:13.206 ]' 00:19:13.206 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.464 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.723 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:13.723 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:14.289 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.547 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.806 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.065 00:19:15.065 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.065 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.065 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.323 { 00:19:15.323 "cntlid": 11, 00:19:15.323 "qid": 0, 00:19:15.323 "state": "enabled", 00:19:15.323 "thread": "nvmf_tgt_poll_group_000", 00:19:15.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:15.323 "listen_address": { 00:19:15.323 "trtype": "TCP", 00:19:15.323 "adrfam": "IPv4", 00:19:15.323 "traddr": "10.0.0.2", 00:19:15.323 "trsvcid": "4420" 00:19:15.323 }, 00:19:15.323 "peer_address": { 00:19:15.323 "trtype": "TCP", 00:19:15.323 "adrfam": "IPv4", 00:19:15.323 "traddr": "10.0.0.1", 00:19:15.323 "trsvcid": "35784" 00:19:15.323 }, 00:19:15.323 "auth": { 00:19:15.323 "state": "completed", 00:19:15.323 "digest": "sha256", 00:19:15.323 "dhgroup": "ffdhe2048" 00:19:15.323 } 00:19:15.323 } 00:19:15.323 ]' 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.323 12:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.323 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.581 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.581 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.581 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.839 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:15.839 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:16.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:16.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:16.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.405 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.405 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.405 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.405 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.663 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.922 00:19:16.922 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.922 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.922 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.180 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.180 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.180 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.180 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.180 12:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.180 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.180 { 00:19:17.180 "cntlid": 13, 00:19:17.180 "qid": 0, 00:19:17.180 "state": "enabled", 00:19:17.180 "thread": "nvmf_tgt_poll_group_000", 00:19:17.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:17.181 "listen_address": { 00:19:17.181 "trtype": "TCP", 00:19:17.181 "adrfam": "IPv4", 00:19:17.181 "traddr": "10.0.0.2", 00:19:17.181 "trsvcid": "4420" 00:19:17.181 }, 00:19:17.181 "peer_address": { 00:19:17.181 "trtype": "TCP", 00:19:17.181 "adrfam": "IPv4", 00:19:17.181 "traddr": "10.0.0.1", 00:19:17.181 "trsvcid": "35820" 00:19:17.181 }, 00:19:17.181 "auth": { 00:19:17.181 "state": "completed", 00:19:17.181 "digest": "sha256", 00:19:17.181 "dhgroup": "ffdhe2048" 00:19:17.181 } 00:19:17.181 } 00:19:17.181 ]' 00:19:17.181 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.181 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.181 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.439 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.439 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.439 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.439 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.439 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.697 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:17.697 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.629 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.888 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.146 00:19:19.146 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.146 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.146 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.404 { 00:19:19.404 "cntlid": 15, 00:19:19.404 "qid": 0, 00:19:19.404 "state": "enabled", 00:19:19.404 "thread": "nvmf_tgt_poll_group_000", 00:19:19.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:19.404 "listen_address": { 00:19:19.404 "trtype": "TCP", 00:19:19.404 "adrfam": "IPv4", 00:19:19.404 "traddr": "10.0.0.2", 00:19:19.404 "trsvcid": "4420" 00:19:19.404 }, 00:19:19.404 "peer_address": { 00:19:19.404 "trtype": "TCP", 00:19:19.404 "adrfam": "IPv4", 00:19:19.404 "traddr": "10.0.0.1", 
00:19:19.404 "trsvcid": "35844" 00:19:19.404 }, 00:19:19.404 "auth": { 00:19:19.404 "state": "completed", 00:19:19.404 "digest": "sha256", 00:19:19.404 "dhgroup": "ffdhe2048" 00:19:19.404 } 00:19:19.404 } 00:19:19.404 ]' 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.404 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.663 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:19.663 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.598 12:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.598 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.164 00:19:21.164 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.164 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.164 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.422 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.422 { 00:19:21.422 "cntlid": 17, 00:19:21.422 "qid": 0, 00:19:21.423 "state": "enabled", 00:19:21.423 "thread": "nvmf_tgt_poll_group_000", 00:19:21.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:21.423 "listen_address": { 00:19:21.423 "trtype": "TCP", 00:19:21.423 "adrfam": "IPv4", 00:19:21.423 "traddr": "10.0.0.2", 00:19:21.423 "trsvcid": "4420" 00:19:21.423 }, 00:19:21.423 "peer_address": { 00:19:21.423 "trtype": "TCP", 00:19:21.423 "adrfam": "IPv4", 00:19:21.423 "traddr": "10.0.0.1", 00:19:21.423 "trsvcid": "34222" 00:19:21.423 }, 00:19:21.423 "auth": { 00:19:21.423 "state": "completed", 00:19:21.423 "digest": "sha256", 00:19:21.423 "dhgroup": "ffdhe3072" 00:19:21.423 } 00:19:21.423 } 00:19:21.423 ]' 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.423 12:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.423 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.681 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:21.681 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.615 12:25:53 
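[Editor's note] The repeated qpairs checks in this log (target/auth.sh@75-77) read the negotiated auth parameters back out of `nvmf_subsystem_get_qpairs` with `jq`. A minimal standalone sketch of those three checks, run against a canned payload modeled on the JSON shown above rather than a live target (requires `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the auth-field verification done at target/auth.sh@75-77,
# against a canned qpairs payload instead of a live nvmf target.
qpairs='[{"cntlid": 17, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072"}}]'

digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'   <<< "$qpairs")

# The test passes only when all three negotiated values match expectations.
[[ $digest == sha256 && $dhgroup == ffdhe3072 && $state == completed ]] \
    && echo "auth negotiation verified"
```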
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.615 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.615 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.615 12:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.873 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.873 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.873 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.873 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.131 00:19:23.132 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.132 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.132 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.389 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.390 { 00:19:23.390 "cntlid": 19, 00:19:23.390 "qid": 0, 00:19:23.390 "state": "enabled", 00:19:23.390 "thread": "nvmf_tgt_poll_group_000", 00:19:23.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:23.390 "listen_address": { 00:19:23.390 "trtype": "TCP", 00:19:23.390 "adrfam": "IPv4", 00:19:23.390 "traddr": "10.0.0.2", 00:19:23.390 "trsvcid": "4420" 00:19:23.390 }, 00:19:23.390 "peer_address": { 00:19:23.390 "trtype": "TCP", 00:19:23.390 "adrfam": "IPv4", 00:19:23.390 "traddr": "10.0.0.1", 00:19:23.390 "trsvcid": "34240" 00:19:23.390 }, 00:19:23.390 "auth": { 00:19:23.390 "state": "completed", 00:19:23.390 "digest": "sha256", 00:19:23.390 "dhgroup": "ffdhe3072" 00:19:23.390 } 00:19:23.390 } 00:19:23.390 ]' 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.390 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.648 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.648 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.648 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.906 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:23.906 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.472 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.472 12:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.730 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.297 00:19:25.297 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.297 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.297 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.556 { 00:19:25.556 "cntlid": 21, 00:19:25.556 "qid": 0, 00:19:25.556 "state": "enabled", 00:19:25.556 "thread": "nvmf_tgt_poll_group_000", 00:19:25.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:25.556 "listen_address": { 00:19:25.556 "trtype": "TCP", 00:19:25.556 "adrfam": "IPv4", 00:19:25.556 "traddr": "10.0.0.2", 00:19:25.556 
"trsvcid": "4420" 00:19:25.556 }, 00:19:25.556 "peer_address": { 00:19:25.556 "trtype": "TCP", 00:19:25.556 "adrfam": "IPv4", 00:19:25.556 "traddr": "10.0.0.1", 00:19:25.556 "trsvcid": "34274" 00:19:25.556 }, 00:19:25.556 "auth": { 00:19:25.556 "state": "completed", 00:19:25.556 "digest": "sha256", 00:19:25.556 "dhgroup": "ffdhe3072" 00:19:25.556 } 00:19:25.556 } 00:19:25.556 ]' 00:19:25.556 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.556 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.815 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:25.815 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.749 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.008 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.266 00:19:27.266 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.266 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.266 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.525 { 00:19:27.525 "cntlid": 23, 00:19:27.525 "qid": 0, 00:19:27.525 "state": "enabled", 00:19:27.525 "thread": "nvmf_tgt_poll_group_000", 00:19:27.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:27.525 "listen_address": { 00:19:27.525 "trtype": "TCP", 00:19:27.525 "adrfam": "IPv4", 00:19:27.525 "traddr": "10.0.0.2", 00:19:27.525 "trsvcid": "4420" 00:19:27.525 }, 00:19:27.525 "peer_address": { 00:19:27.525 "trtype": "TCP", 00:19:27.525 "adrfam": "IPv4", 00:19:27.525 "traddr": "10.0.0.1", 00:19:27.525 "trsvcid": "34294" 00:19:27.525 }, 00:19:27.525 "auth": { 00:19:27.525 "state": "completed", 00:19:27.525 "digest": "sha256", 00:19:27.525 "dhgroup": "ffdhe3072" 00:19:27.525 } 00:19:27.525 } 00:19:27.525 ]' 00:19:27.525 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.783 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.783 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.043 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:28.043 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
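[Editor's note] At this point the log moves from the ffdhe3072 iterations to ffdhe4096, driven by the nested loops at target/auth.sh@119-123: every DH group is exercised with every key id. A dry-run sketch of that loop structure (the arrays are trimmed to the two groups visible in this part of the log; the real script covers more digests and groups, and the stub below only builds a plan instead of issuing RPCs):

```shell
#!/usr/bin/env bash
# Sketch of the iteration structure at target/auth.sh@119-123,
# with the RPC/connect helpers replaced by a dry-run plan string.
dhgroups=(ffdhe3072 ffdhe4096)   # trimmed for illustration
keys=(key0 key1 key2 key3)

plan=""
for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
    for keyid in "${!keys[@]}"; do         # target/auth.sh@120
        plan+="connect_authenticate sha256 $dhgroup $keyid"$'\n'
    done
done
printf '%s' "$plan"
```

Two groups times four key ids gives the eight connect_authenticate rounds seen across this stretch of the log.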
00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.666 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.491 00:19:29.491 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.491 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.491 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.748 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.748 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.748 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.748 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.748 12:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.749 { 00:19:29.749 "cntlid": 25, 00:19:29.749 "qid": 0, 00:19:29.749 "state": "enabled", 00:19:29.749 "thread": "nvmf_tgt_poll_group_000", 00:19:29.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:29.749 "listen_address": { 00:19:29.749 "trtype": "TCP", 00:19:29.749 "adrfam": "IPv4", 00:19:29.749 "traddr": "10.0.0.2", 00:19:29.749 "trsvcid": "4420" 00:19:29.749 }, 00:19:29.749 "peer_address": { 00:19:29.749 "trtype": "TCP", 00:19:29.749 "adrfam": "IPv4", 00:19:29.749 "traddr": "10.0.0.1", 00:19:29.749 "trsvcid": "34320" 00:19:29.749 }, 00:19:29.749 "auth": { 00:19:29.749 "state": "completed", 00:19:29.749 "digest": "sha256", 00:19:29.749 "dhgroup": "ffdhe4096" 00:19:29.749 } 00:19:29.749 } 00:19:29.749 ]' 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.749 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.006 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:30.006 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.941 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.941 12:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.199 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.457 00:19:31.457 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.457 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.457 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.716 { 00:19:31.716 "cntlid": 27, 00:19:31.716 "qid": 0, 00:19:31.716 "state": "enabled", 00:19:31.716 "thread": "nvmf_tgt_poll_group_000", 00:19:31.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:31.716 "listen_address": { 00:19:31.716 "trtype": "TCP", 00:19:31.716 "adrfam": "IPv4", 00:19:31.716 "traddr": "10.0.0.2", 00:19:31.716 
"trsvcid": "4420" 00:19:31.716 }, 00:19:31.716 "peer_address": { 00:19:31.716 "trtype": "TCP", 00:19:31.716 "adrfam": "IPv4", 00:19:31.716 "traddr": "10.0.0.1", 00:19:31.716 "trsvcid": "36056" 00:19:31.716 }, 00:19:31.716 "auth": { 00:19:31.716 "state": "completed", 00:19:31.716 "digest": "sha256", 00:19:31.716 "dhgroup": "ffdhe4096" 00:19:31.716 } 00:19:31.716 } 00:19:31.716 ]' 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.716 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.974 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.974 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.974 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.974 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.974 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.232 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:32.232 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:32.798 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.057 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.315 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.574 00:19:33.574 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.574 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:33.574 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.831 { 00:19:33.831 "cntlid": 29, 00:19:33.831 "qid": 0, 00:19:33.831 "state": "enabled", 00:19:33.831 "thread": "nvmf_tgt_poll_group_000", 00:19:33.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:33.831 "listen_address": { 00:19:33.831 "trtype": "TCP", 00:19:33.831 "adrfam": "IPv4", 00:19:33.831 "traddr": "10.0.0.2", 00:19:33.831 "trsvcid": "4420" 00:19:33.831 }, 00:19:33.831 "peer_address": { 00:19:33.831 "trtype": "TCP", 00:19:33.831 "adrfam": "IPv4", 00:19:33.831 "traddr": "10.0.0.1", 00:19:33.831 "trsvcid": "36094" 00:19:33.831 }, 00:19:33.831 "auth": { 00:19:33.831 "state": "completed", 00:19:33.831 "digest": "sha256", 00:19:33.831 "dhgroup": "ffdhe4096" 00:19:33.831 } 00:19:33.831 } 00:19:33.831 ]' 00:19:33.831 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.089 12:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.089 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.346 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:34.347 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.281 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.539 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.797 00:19:35.797 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.797 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.797 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.055 { 00:19:36.055 "cntlid": 31, 00:19:36.055 "qid": 0, 00:19:36.055 "state": "enabled", 00:19:36.055 "thread": "nvmf_tgt_poll_group_000", 00:19:36.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:36.055 "listen_address": { 00:19:36.055 "trtype": "TCP", 00:19:36.055 "adrfam": "IPv4", 00:19:36.055 "traddr": "10.0.0.2", 00:19:36.055 "trsvcid": "4420" 00:19:36.055 }, 00:19:36.055 "peer_address": { 00:19:36.055 "trtype": "TCP", 00:19:36.055 "adrfam": "IPv4", 00:19:36.055 "traddr": "10.0.0.1", 00:19:36.055 "trsvcid": "36122" 00:19:36.055 }, 00:19:36.055 "auth": { 00:19:36.055 "state": "completed", 00:19:36.055 "digest": "sha256", 00:19:36.055 "dhgroup": "ffdhe4096" 00:19:36.055 } 00:19:36.055 } 00:19:36.055 ]' 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.055 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.313 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.313 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.313 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.313 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.313 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.572 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:36.572 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.138 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.138 12:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.396 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.963 00:19:37.963 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.963 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.963 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.222 { 00:19:38.222 "cntlid": 33, 00:19:38.222 "qid": 0, 00:19:38.222 "state": "enabled", 00:19:38.222 "thread": "nvmf_tgt_poll_group_000", 00:19:38.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:38.222 "listen_address": { 00:19:38.222 "trtype": "TCP", 00:19:38.222 "adrfam": "IPv4", 00:19:38.222 "traddr": "10.0.0.2", 00:19:38.222 
"trsvcid": "4420" 00:19:38.222 }, 00:19:38.222 "peer_address": { 00:19:38.222 "trtype": "TCP", 00:19:38.222 "adrfam": "IPv4", 00:19:38.222 "traddr": "10.0.0.1", 00:19:38.222 "trsvcid": "36148" 00:19:38.222 }, 00:19:38.222 "auth": { 00:19:38.222 "state": "completed", 00:19:38.222 "digest": "sha256", 00:19:38.222 "dhgroup": "ffdhe6144" 00:19:38.222 } 00:19:38.222 } 00:19:38.222 ]' 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.222 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.223 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.482 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.482 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.482 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.482 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.739 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:38.739 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:39.304 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.562 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.820 12:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.820 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.078 00:19:40.078 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.078 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.078 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.645 { 00:19:40.645 "cntlid": 35, 00:19:40.645 "qid": 0, 00:19:40.645 "state": "enabled", 00:19:40.645 "thread": "nvmf_tgt_poll_group_000", 00:19:40.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:40.645 "listen_address": { 00:19:40.645 "trtype": "TCP", 00:19:40.645 "adrfam": "IPv4", 00:19:40.645 "traddr": "10.0.0.2", 00:19:40.645 "trsvcid": "4420" 00:19:40.645 }, 00:19:40.645 "peer_address": { 00:19:40.645 "trtype": "TCP", 00:19:40.645 "adrfam": "IPv4", 00:19:40.645 "traddr": "10.0.0.1", 00:19:40.645 "trsvcid": "36176" 00:19:40.645 }, 00:19:40.645 "auth": { 00:19:40.645 "state": "completed", 00:19:40.645 "digest": "sha256", 00:19:40.645 "dhgroup": "ffdhe6144" 00:19:40.645 } 00:19:40.645 } 00:19:40.645 ]' 00:19:40.645 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.645 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.645 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.903 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:40.903 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:41.837 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.837 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:41.837 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.837 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.838 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.404 00:19:42.404 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.404 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.404 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.663 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.663 { 00:19:42.663 "cntlid": 37, 00:19:42.663 "qid": 0, 00:19:42.663 "state": "enabled", 00:19:42.663 "thread": "nvmf_tgt_poll_group_000", 00:19:42.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:42.663 "listen_address": { 00:19:42.663 "trtype": "TCP", 00:19:42.663 "adrfam": "IPv4", 00:19:42.663 "traddr": "10.0.0.2", 00:19:42.663 "trsvcid": "4420" 00:19:42.663 }, 00:19:42.663 "peer_address": { 00:19:42.663 "trtype": "TCP", 00:19:42.663 "adrfam": "IPv4", 00:19:42.663 "traddr": "10.0.0.1", 00:19:42.663 "trsvcid": "46516" 00:19:42.663 }, 00:19:42.663 "auth": { 00:19:42.663 "state": "completed", 00:19:42.663 "digest": "sha256", 00:19:42.663 "dhgroup": "ffdhe6144" 00:19:42.663 } 00:19:42.663 } 00:19:42.663 ]' 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
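The comparisons like `[[ sha256 == \s\h\a\2\5\6 ]]` in the trace look garbled but are an xtrace artifact: when the right-hand side of `[[ == ]]` is quoted in the script, `set -x` re-prints it with every character backslash-escaped to make clear it is a literal string, not a glob pattern. A small reproduction (the `check_digest` helper is ours):

```shell
#!/usr/bin/env bash
# Reproduce the \s\h\a\2\5\6 lines from the trace: a quoted RHS in [[ == ]]
# is matched literally, and xtrace re-quotes it by escaping each character.
check_digest() {
    [[ $1 == "sha256" ]]   # under `set -x` traces as: [[ sha256 == \s\h\a\2\5\6 ]]
}

check_digest sha256 && echo "digest ok"
check_digest sha512 || echo "digest mismatch"
```

The same escaping explains `\n\v\m\e\0`, `\f\f\d\h\e\6\1\4\4`, and `\c\o\m\p\l\e\t\e\d` elsewhere in this log.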
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.663 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.921 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:42.921 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.856 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.114 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.372 00:19:44.630 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.630 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.630 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.889 { 00:19:44.889 "cntlid": 39, 00:19:44.889 "qid": 0, 00:19:44.889 "state": "enabled", 00:19:44.889 "thread": "nvmf_tgt_poll_group_000", 00:19:44.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:44.889 "listen_address": { 00:19:44.889 "trtype": "TCP", 00:19:44.889 "adrfam": 
"IPv4", 00:19:44.889 "traddr": "10.0.0.2", 00:19:44.889 "trsvcid": "4420" 00:19:44.889 }, 00:19:44.889 "peer_address": { 00:19:44.889 "trtype": "TCP", 00:19:44.889 "adrfam": "IPv4", 00:19:44.889 "traddr": "10.0.0.1", 00:19:44.889 "trsvcid": "46546" 00:19:44.889 }, 00:19:44.889 "auth": { 00:19:44.889 "state": "completed", 00:19:44.889 "digest": "sha256", 00:19:44.889 "dhgroup": "ffdhe6144" 00:19:44.889 } 00:19:44.889 } 00:19:44.889 ]' 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.889 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.148 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:45.148 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.082 
12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.082 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.649 00:19:46.649 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.649 12:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.649 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.907 { 00:19:46.907 "cntlid": 41, 00:19:46.907 "qid": 0, 00:19:46.907 "state": "enabled", 00:19:46.907 "thread": "nvmf_tgt_poll_group_000", 00:19:46.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:46.907 "listen_address": { 00:19:46.907 "trtype": "TCP", 00:19:46.907 "adrfam": "IPv4", 00:19:46.907 "traddr": "10.0.0.2", 00:19:46.907 "trsvcid": "4420" 00:19:46.907 }, 00:19:46.907 "peer_address": { 00:19:46.907 "trtype": "TCP", 00:19:46.907 "adrfam": "IPv4", 00:19:46.907 "traddr": "10.0.0.1", 00:19:46.907 "trsvcid": "46572" 00:19:46.907 }, 00:19:46.907 "auth": { 00:19:46.907 "state": "completed", 00:19:46.907 "digest": "sha256", 00:19:46.907 "dhgroup": "ffdhe8192" 00:19:46.907 } 00:19:46.907 } 00:19:46.907 ]' 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.907 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.908 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.908 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.908 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:47.166 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
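The `DHHC-1:<nn>:<base64>:` secrets passed to `nvme connect` throughout this section follow, to our reading, nvme-cli's DH-HMAC-CHAP key format (an assumption from that tool's conventions, not stated in the log): the middle field selects the HMAC (`01`/`02`/`03` for SHA-256/384/512, `00` for none) and the base64 blob is the key bytes followed by a 4-byte CRC32 of the key. That sizing can be sanity-checked against a secret copied from the log:

```shell
#!/usr/bin/env bash
# Sanity-check the sizing of a DHHC-1 secret from the log. Assumption
# (nvme-cli key format, not stated here): blob = key || CRC32, so a
# SHA-256 (01) secret should decode to 32 + 4 = 36 bytes.
secret='DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6:'

hmac=$(cut -d: -f2 <<<"$secret")
blob=$(cut -d: -f3 <<<"$secret")
nbytes=$(printf '%s' "$blob" | base64 -d | wc -c)
nbytes=$((nbytes))   # strip any wc padding

echo "hmac=$hmac bytes=$nbytes"   # hmac=01 bytes=36
```

The same arithmetic fits the other secrets in the log: the `02` (SHA-384) blobs decode to 48 + 4 bytes and the `03` (SHA-512) blobs to 64 + 4.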
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.358 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.358 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.358 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.358 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.924 00:19:48.925 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.925 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.925 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.183 12:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.183 { 00:19:49.183 "cntlid": 43, 00:19:49.183 "qid": 0, 00:19:49.183 "state": "enabled", 00:19:49.183 "thread": "nvmf_tgt_poll_group_000", 00:19:49.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:49.183 "listen_address": { 00:19:49.183 "trtype": "TCP", 00:19:49.183 "adrfam": "IPv4", 00:19:49.183 "traddr": "10.0.0.2", 00:19:49.183 "trsvcid": "4420" 00:19:49.183 }, 00:19:49.183 "peer_address": { 00:19:49.183 "trtype": "TCP", 00:19:49.183 "adrfam": "IPv4", 00:19:49.183 "traddr": "10.0.0.1", 00:19:49.183 "trsvcid": "46604" 00:19:49.183 }, 00:19:49.183 "auth": { 00:19:49.183 "state": "completed", 00:19:49.183 "digest": "sha256", 00:19:49.183 "dhgroup": "ffdhe8192" 00:19:49.183 } 00:19:49.183 } 00:19:49.183 ]' 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.183 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.184 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.442 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:49.442 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.376 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.635 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.201 00:19:51.201 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.201 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.201 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.459 { 00:19:51.459 "cntlid": 45, 00:19:51.459 "qid": 0, 00:19:51.459 "state": "enabled", 00:19:51.459 "thread": "nvmf_tgt_poll_group_000", 00:19:51.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:51.459 
"listen_address": { 00:19:51.459 "trtype": "TCP", 00:19:51.459 "adrfam": "IPv4", 00:19:51.459 "traddr": "10.0.0.2", 00:19:51.459 "trsvcid": "4420" 00:19:51.459 }, 00:19:51.459 "peer_address": { 00:19:51.459 "trtype": "TCP", 00:19:51.459 "adrfam": "IPv4", 00:19:51.459 "traddr": "10.0.0.1", 00:19:51.459 "trsvcid": "40284" 00:19:51.459 }, 00:19:51.459 "auth": { 00:19:51.459 "state": "completed", 00:19:51.459 "digest": "sha256", 00:19:51.459 "dhgroup": "ffdhe8192" 00:19:51.459 } 00:19:51.459 } 00:19:51.459 ]' 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.459 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.459 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.459 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.717 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.717 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.717 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.975 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:51.975 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.910 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.845 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.845 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.846 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.846 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.846 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.846 { 00:19:53.846 "cntlid": 47, 00:19:53.846 "qid": 0, 00:19:53.846 "state": "enabled", 00:19:53.846 "thread": "nvmf_tgt_poll_group_000", 00:19:53.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:53.846 "listen_address": { 00:19:53.846 "trtype": "TCP", 00:19:53.846 "adrfam": "IPv4", 00:19:53.846 "traddr": "10.0.0.2", 00:19:53.846 "trsvcid": "4420" 00:19:53.846 }, 00:19:53.846 "peer_address": { 00:19:53.846 "trtype": "TCP", 00:19:53.846 "adrfam": "IPv4", 00:19:53.846 "traddr": "10.0.0.1", 00:19:53.846 "trsvcid": "40314" 00:19:53.846 }, 00:19:53.846 "auth": { 00:19:53.846 "state": "completed", 00:19:53.846 "digest": "sha256", 00:19:53.846 "dhgroup": "ffdhe8192" 00:19:53.846 } 00:19:53.846 } 00:19:53.846 ]' 00:19:53.846 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.846 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.846 12:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.103 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.103 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.104 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.104 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.104 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.362 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:54.362 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.300 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.558 
12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.558 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.817 00:19:55.817 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.817 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.817 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.075 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.075 { 00:19:56.075 "cntlid": 49, 00:19:56.075 "qid": 0, 00:19:56.075 "state": "enabled", 00:19:56.075 "thread": "nvmf_tgt_poll_group_000", 00:19:56.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:56.076 "listen_address": { 00:19:56.076 "trtype": "TCP", 00:19:56.076 "adrfam": "IPv4", 00:19:56.076 "traddr": "10.0.0.2", 00:19:56.076 "trsvcid": "4420" 00:19:56.076 }, 00:19:56.076 "peer_address": { 00:19:56.076 "trtype": "TCP", 00:19:56.076 "adrfam": "IPv4", 00:19:56.076 "traddr": "10.0.0.1", 00:19:56.076 "trsvcid": "40342" 00:19:56.076 }, 00:19:56.076 "auth": { 00:19:56.076 "state": "completed", 00:19:56.076 "digest": "sha384", 00:19:56.076 "dhgroup": "null" 00:19:56.076 } 00:19:56.076 } 00:19:56.076 ]' 00:19:56.076 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.076 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.076 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.076 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.076 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.334 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.334 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:56.334 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.592 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:56.592 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:19:57.527 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.527 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:57.527 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.528 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.528 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.528 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.528 12:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.528 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.528 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.094 00:19:58.094 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.094 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.094 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.352 { 00:19:58.352 "cntlid": 51, 00:19:58.352 "qid": 0, 00:19:58.352 "state": "enabled", 00:19:58.352 "thread": "nvmf_tgt_poll_group_000", 00:19:58.352 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:58.352 "listen_address": { 00:19:58.352 "trtype": "TCP", 00:19:58.352 "adrfam": "IPv4", 00:19:58.352 "traddr": "10.0.0.2", 00:19:58.352 "trsvcid": "4420" 00:19:58.352 }, 00:19:58.352 "peer_address": { 00:19:58.352 "trtype": "TCP", 00:19:58.352 "adrfam": "IPv4", 00:19:58.352 "traddr": "10.0.0.1", 00:19:58.352 "trsvcid": "40370" 00:19:58.352 }, 00:19:58.352 "auth": { 00:19:58.352 "state": "completed", 00:19:58.352 "digest": "sha384", 00:19:58.352 "dhgroup": "null" 00:19:58.352 } 00:19:58.352 } 00:19:58.352 ]' 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.352 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.611 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:58.611 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.545 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.804 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.062 00:20:00.062 12:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.062 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.062 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.321 { 00:20:00.321 "cntlid": 53, 00:20:00.321 "qid": 0, 00:20:00.321 "state": "enabled", 00:20:00.321 "thread": "nvmf_tgt_poll_group_000", 00:20:00.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:00.321 "listen_address": { 00:20:00.321 "trtype": "TCP", 00:20:00.321 "adrfam": "IPv4", 00:20:00.321 "traddr": "10.0.0.2", 00:20:00.321 "trsvcid": "4420" 00:20:00.321 }, 00:20:00.321 "peer_address": { 00:20:00.321 "trtype": "TCP", 00:20:00.321 "adrfam": "IPv4", 00:20:00.321 "traddr": "10.0.0.1", 00:20:00.321 "trsvcid": "40404" 00:20:00.321 }, 00:20:00.321 "auth": { 00:20:00.321 "state": "completed", 00:20:00.321 "digest": "sha384", 00:20:00.321 "dhgroup": "null" 00:20:00.321 } 00:20:00.321 } 00:20:00.321 ]' 00:20:00.321 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:00.579 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.579 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.579 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:00.579 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.579 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.579 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.579 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.837 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:00.837 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:01.771 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.771 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:01.772 
12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.772 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.337 00:20:02.337 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.337 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.337 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.595 12:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.595 { 00:20:02.595 "cntlid": 55, 00:20:02.595 "qid": 0, 00:20:02.595 "state": "enabled", 00:20:02.595 "thread": "nvmf_tgt_poll_group_000", 00:20:02.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:02.595 "listen_address": { 00:20:02.595 "trtype": "TCP", 00:20:02.595 "adrfam": "IPv4", 00:20:02.595 "traddr": "10.0.0.2", 00:20:02.595 "trsvcid": "4420" 00:20:02.595 }, 00:20:02.595 "peer_address": { 00:20:02.595 "trtype": "TCP", 00:20:02.595 "adrfam": "IPv4", 00:20:02.595 "traddr": "10.0.0.1", 00:20:02.595 "trsvcid": "58536" 00:20:02.595 }, 00:20:02.595 "auth": { 00:20:02.595 "state": "completed", 00:20:02.595 "digest": "sha384", 00:20:02.595 "dhgroup": "null" 00:20:02.595 } 00:20:02.595 } 00:20:02.595 ]' 00:20:02.595 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.595 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.853 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:02.853 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.787 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.787 12:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.046 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.304 00:20:04.304 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.304 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.304 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.562 { 00:20:04.562 "cntlid": 57, 00:20:04.562 "qid": 0, 00:20:04.562 "state": "enabled", 00:20:04.562 "thread": "nvmf_tgt_poll_group_000", 00:20:04.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:04.562 "listen_address": { 00:20:04.562 "trtype": "TCP", 00:20:04.562 "adrfam": "IPv4", 00:20:04.562 "traddr": "10.0.0.2", 00:20:04.562 
"trsvcid": "4420" 00:20:04.562 }, 00:20:04.562 "peer_address": { 00:20:04.562 "trtype": "TCP", 00:20:04.562 "adrfam": "IPv4", 00:20:04.562 "traddr": "10.0.0.1", 00:20:04.562 "trsvcid": "58550" 00:20:04.562 }, 00:20:04.562 "auth": { 00:20:04.562 "state": "completed", 00:20:04.562 "digest": "sha384", 00:20:04.562 "dhgroup": "ffdhe2048" 00:20:04.562 } 00:20:04.562 } 00:20:04.562 ]' 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.562 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.820 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.820 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.820 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.820 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.820 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.079 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:05.079 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.013 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.013 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.580 00:20:06.580 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.580 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.580 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.838 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.838 { 00:20:06.838 "cntlid": 59, 00:20:06.838 "qid": 0, 00:20:06.839 "state": "enabled", 00:20:06.839 "thread": "nvmf_tgt_poll_group_000", 00:20:06.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:06.839 "listen_address": { 00:20:06.839 "trtype": "TCP", 00:20:06.839 "adrfam": "IPv4", 00:20:06.839 "traddr": "10.0.0.2", 00:20:06.839 "trsvcid": "4420" 00:20:06.839 }, 00:20:06.839 "peer_address": { 00:20:06.839 "trtype": "TCP", 00:20:06.839 "adrfam": "IPv4", 00:20:06.839 "traddr": "10.0.0.1", 00:20:06.839 "trsvcid": "58582" 00:20:06.839 }, 00:20:06.839 "auth": { 00:20:06.839 "state": "completed", 00:20:06.839 "digest": "sha384", 00:20:06.839 "dhgroup": "ffdhe2048" 00:20:06.839 } 00:20:06.839 } 00:20:06.839 ]' 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.839 12:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.839 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.097 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:07.097 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.113 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.419 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.696 00:20:08.696 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.696 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.696 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.955 12:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.955 { 00:20:08.955 "cntlid": 61, 00:20:08.955 "qid": 0, 00:20:08.955 "state": "enabled", 00:20:08.955 "thread": "nvmf_tgt_poll_group_000", 00:20:08.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:08.955 "listen_address": { 00:20:08.955 "trtype": "TCP", 00:20:08.955 "adrfam": "IPv4", 00:20:08.955 "traddr": "10.0.0.2", 00:20:08.955 "trsvcid": "4420" 00:20:08.955 }, 00:20:08.955 "peer_address": { 00:20:08.955 "trtype": "TCP", 00:20:08.955 "adrfam": "IPv4", 00:20:08.955 "traddr": "10.0.0.1", 00:20:08.955 "trsvcid": "58610" 00:20:08.955 }, 00:20:08.955 "auth": { 00:20:08.955 "state": "completed", 00:20:08.955 "digest": "sha384", 00:20:08.955 "dhgroup": "ffdhe2048" 00:20:08.955 } 00:20:08.955 } 00:20:08.955 ]' 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.955 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.214 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:09.214 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.151 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.410 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.411 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.669 00:20:10.669 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.669 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.669 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.929 { 00:20:10.929 "cntlid": 63, 00:20:10.929 "qid": 0, 00:20:10.929 "state": "enabled", 00:20:10.929 "thread": "nvmf_tgt_poll_group_000", 00:20:10.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:10.929 "listen_address": { 00:20:10.929 "trtype": "TCP", 00:20:10.929 "adrfam": 
"IPv4", 00:20:10.929 "traddr": "10.0.0.2", 00:20:10.929 "trsvcid": "4420" 00:20:10.929 }, 00:20:10.929 "peer_address": { 00:20:10.929 "trtype": "TCP", 00:20:10.929 "adrfam": "IPv4", 00:20:10.929 "traddr": "10.0.0.1", 00:20:10.929 "trsvcid": "58636" 00:20:10.929 }, 00:20:10.929 "auth": { 00:20:10.929 "state": "completed", 00:20:10.929 "digest": "sha384", 00:20:10.929 "dhgroup": "ffdhe2048" 00:20:10.929 } 00:20:10.929 } 00:20:10.929 ]' 00:20:10.929 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.187 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.446 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:11.446 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.383 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.384 
12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.384 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.951 00:20:12.951 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.951 12:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.951 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.210 { 00:20:13.210 "cntlid": 65, 00:20:13.210 "qid": 0, 00:20:13.210 "state": "enabled", 00:20:13.210 "thread": "nvmf_tgt_poll_group_000", 00:20:13.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:13.210 "listen_address": { 00:20:13.210 "trtype": "TCP", 00:20:13.210 "adrfam": "IPv4", 00:20:13.210 "traddr": "10.0.0.2", 00:20:13.210 "trsvcid": "4420" 00:20:13.210 }, 00:20:13.210 "peer_address": { 00:20:13.210 "trtype": "TCP", 00:20:13.210 "adrfam": "IPv4", 00:20:13.210 "traddr": "10.0.0.1", 00:20:13.210 "trsvcid": "49326" 00:20:13.210 }, 00:20:13.210 "auth": { 00:20:13.210 "state": "completed", 00:20:13.210 "digest": "sha384", 00:20:13.210 "dhgroup": "ffdhe3072" 00:20:13.210 } 00:20:13.210 } 00:20:13.210 ]' 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.210 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.469 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:13.469 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.406 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.665 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.924 00:20:14.924 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.924 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.924 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.183 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.183 { 00:20:15.183 "cntlid": 67, 00:20:15.183 "qid": 0, 00:20:15.183 "state": "enabled", 00:20:15.183 "thread": "nvmf_tgt_poll_group_000", 00:20:15.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:15.183 "listen_address": { 00:20:15.183 "trtype": "TCP", 00:20:15.183 "adrfam": "IPv4", 00:20:15.183 "traddr": "10.0.0.2", 00:20:15.183 "trsvcid": "4420" 00:20:15.183 }, 00:20:15.183 "peer_address": { 00:20:15.183 "trtype": "TCP", 00:20:15.183 "adrfam": "IPv4", 00:20:15.183 "traddr": "10.0.0.1", 00:20:15.183 "trsvcid": "49356" 00:20:15.183 }, 00:20:15.183 "auth": { 00:20:15.183 "state": "completed", 00:20:15.183 "digest": "sha384", 00:20:15.183 "dhgroup": "ffdhe3072" 00:20:15.183 } 00:20:15.183 } 00:20:15.183 ]' 00:20:15.183 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.441 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.700 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:15.700 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.636 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.895 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.154 00:20:17.154 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.154 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.154 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.413 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.413 { 00:20:17.413 "cntlid": 69, 00:20:17.413 "qid": 0, 00:20:17.413 "state": "enabled", 00:20:17.413 "thread": "nvmf_tgt_poll_group_000", 00:20:17.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:17.414 
"listen_address": { 00:20:17.414 "trtype": "TCP", 00:20:17.414 "adrfam": "IPv4", 00:20:17.414 "traddr": "10.0.0.2", 00:20:17.414 "trsvcid": "4420" 00:20:17.414 }, 00:20:17.414 "peer_address": { 00:20:17.414 "trtype": "TCP", 00:20:17.414 "adrfam": "IPv4", 00:20:17.414 "traddr": "10.0.0.1", 00:20:17.414 "trsvcid": "49372" 00:20:17.414 }, 00:20:17.414 "auth": { 00:20:17.414 "state": "completed", 00:20:17.414 "digest": "sha384", 00:20:17.414 "dhgroup": "ffdhe3072" 00:20:17.414 } 00:20:17.414 } 00:20:17.414 ]' 00:20:17.414 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.414 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.414 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.672 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.672 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.672 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.673 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.673 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.931 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:17.931 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:18.868 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.868 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.869 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.128 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.386 00:20:19.387 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.387 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:19.387 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.646 { 00:20:19.646 "cntlid": 71, 00:20:19.646 "qid": 0, 00:20:19.646 "state": "enabled", 00:20:19.646 "thread": "nvmf_tgt_poll_group_000", 00:20:19.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:19.646 "listen_address": { 00:20:19.646 "trtype": "TCP", 00:20:19.646 "adrfam": "IPv4", 00:20:19.646 "traddr": "10.0.0.2", 00:20:19.646 "trsvcid": "4420" 00:20:19.646 }, 00:20:19.646 "peer_address": { 00:20:19.646 "trtype": "TCP", 00:20:19.646 "adrfam": "IPv4", 00:20:19.646 "traddr": "10.0.0.1", 00:20:19.646 "trsvcid": "49384" 00:20:19.646 }, 00:20:19.646 "auth": { 00:20:19.646 "state": "completed", 00:20:19.646 "digest": "sha384", 00:20:19.646 "dhgroup": "ffdhe3072" 00:20:19.646 } 00:20:19.646 } 00:20:19.646 ]' 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.646 12:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.646 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.905 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.905 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.905 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.164 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:20.164 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.101 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.669 00:20:21.669 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.669 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.669 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.927 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.927 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.927 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.927 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.928 { 00:20:21.928 "cntlid": 73, 00:20:21.928 "qid": 0, 00:20:21.928 "state": "enabled", 00:20:21.928 "thread": "nvmf_tgt_poll_group_000", 00:20:21.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:21.928 "listen_address": { 00:20:21.928 "trtype": "TCP", 00:20:21.928 "adrfam": "IPv4", 00:20:21.928 "traddr": "10.0.0.2", 00:20:21.928 "trsvcid": "4420" 00:20:21.928 }, 00:20:21.928 "peer_address": { 00:20:21.928 "trtype": "TCP", 00:20:21.928 "adrfam": "IPv4", 00:20:21.928 "traddr": "10.0.0.1", 00:20:21.928 "trsvcid": "57450" 00:20:21.928 }, 00:20:21.928 "auth": { 00:20:21.928 "state": "completed", 00:20:21.928 "digest": "sha384", 00:20:21.928 "dhgroup": "ffdhe4096" 00:20:21.928 } 00:20:21.928 } 00:20:21.928 ]' 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.928 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.928 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.186 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:22.186 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.122 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.381 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.382 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.382 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.382 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.640 00:20:23.640 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.640 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.641 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.900 { 00:20:23.900 "cntlid": 75, 00:20:23.900 "qid": 0, 00:20:23.900 "state": "enabled", 00:20:23.900 "thread": "nvmf_tgt_poll_group_000", 00:20:23.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:23.900 
"listen_address": { 00:20:23.900 "trtype": "TCP", 00:20:23.900 "adrfam": "IPv4", 00:20:23.900 "traddr": "10.0.0.2", 00:20:23.900 "trsvcid": "4420" 00:20:23.900 }, 00:20:23.900 "peer_address": { 00:20:23.900 "trtype": "TCP", 00:20:23.900 "adrfam": "IPv4", 00:20:23.900 "traddr": "10.0.0.1", 00:20:23.900 "trsvcid": "57484" 00:20:23.900 }, 00:20:23.900 "auth": { 00:20:23.900 "state": "completed", 00:20:23.900 "digest": "sha384", 00:20:23.900 "dhgroup": "ffdhe4096" 00:20:23.900 } 00:20:23.900 } 00:20:23.900 ]' 00:20:23.900 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.159 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.418 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:24.418 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.353 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.354 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.612 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.612 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.612 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.612 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.872 00:20:25.872 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:25.872 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.872 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.131 { 00:20:26.131 "cntlid": 77, 00:20:26.131 "qid": 0, 00:20:26.131 "state": "enabled", 00:20:26.131 "thread": "nvmf_tgt_poll_group_000", 00:20:26.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:26.131 "listen_address": { 00:20:26.131 "trtype": "TCP", 00:20:26.131 "adrfam": "IPv4", 00:20:26.131 "traddr": "10.0.0.2", 00:20:26.131 "trsvcid": "4420" 00:20:26.131 }, 00:20:26.131 "peer_address": { 00:20:26.131 "trtype": "TCP", 00:20:26.131 "adrfam": "IPv4", 00:20:26.131 "traddr": "10.0.0.1", 00:20:26.131 "trsvcid": "57512" 00:20:26.131 }, 00:20:26.131 "auth": { 00:20:26.131 "state": "completed", 00:20:26.131 "digest": "sha384", 00:20:26.131 "dhgroup": "ffdhe4096" 00:20:26.131 } 00:20:26.131 } 00:20:26.131 ]' 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.131 12:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.131 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.390 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.390 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.390 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.649 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:26.649 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.217 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:27.476 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.476 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.477 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.045 00:20:28.045 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.045 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.045 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.303 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.303 { 00:20:28.303 "cntlid": 79, 00:20:28.303 "qid": 0, 00:20:28.303 "state": "enabled", 00:20:28.303 "thread": "nvmf_tgt_poll_group_000", 00:20:28.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:28.303 "listen_address": { 00:20:28.303 "trtype": "TCP", 00:20:28.303 "adrfam": "IPv4", 00:20:28.303 "traddr": "10.0.0.2", 00:20:28.303 "trsvcid": "4420" 00:20:28.303 }, 00:20:28.303 "peer_address": { 00:20:28.303 "trtype": "TCP", 00:20:28.303 "adrfam": "IPv4", 00:20:28.303 "traddr": "10.0.0.1", 00:20:28.303 "trsvcid": "57550" 00:20:28.303 }, 00:20:28.303 "auth": { 00:20:28.303 "state": "completed", 00:20:28.303 "digest": "sha384", 00:20:28.303 "dhgroup": "ffdhe4096" 00:20:28.303 } 00:20:28.303 } 00:20:28.303 ]' 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.303 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.303 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.562 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:28.562 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:29.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.756 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.015 00:20:30.015 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.015 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.015 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.274 { 00:20:30.274 "cntlid": 81, 00:20:30.274 "qid": 0, 00:20:30.274 "state": "enabled", 00:20:30.274 "thread": "nvmf_tgt_poll_group_000", 00:20:30.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:30.274 "listen_address": { 
00:20:30.274 "trtype": "TCP", 00:20:30.274 "adrfam": "IPv4", 00:20:30.274 "traddr": "10.0.0.2", 00:20:30.274 "trsvcid": "4420" 00:20:30.274 }, 00:20:30.274 "peer_address": { 00:20:30.274 "trtype": "TCP", 00:20:30.274 "adrfam": "IPv4", 00:20:30.274 "traddr": "10.0.0.1", 00:20:30.274 "trsvcid": "57582" 00:20:30.274 }, 00:20:30.274 "auth": { 00:20:30.274 "state": "completed", 00:20:30.274 "digest": "sha384", 00:20:30.274 "dhgroup": "ffdhe6144" 00:20:30.274 } 00:20:30.274 } 00:20:30.274 ]' 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.274 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.533 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.533 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.533 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.533 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.533 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.792 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:30.792 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.359 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.617 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.184 00:20:32.184 12:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.184 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.184 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.443 { 00:20:32.443 "cntlid": 83, 00:20:32.443 "qid": 0, 00:20:32.443 "state": "enabled", 00:20:32.443 "thread": "nvmf_tgt_poll_group_000", 00:20:32.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:32.443 "listen_address": { 00:20:32.443 "trtype": "TCP", 00:20:32.443 "adrfam": "IPv4", 00:20:32.443 "traddr": "10.0.0.2", 00:20:32.443 "trsvcid": "4420" 00:20:32.443 }, 00:20:32.443 "peer_address": { 00:20:32.443 "trtype": "TCP", 00:20:32.443 "adrfam": "IPv4", 00:20:32.443 "traddr": "10.0.0.1", 00:20:32.443 "trsvcid": "44640" 00:20:32.443 }, 00:20:32.443 "auth": { 00:20:32.443 "state": "completed", 00:20:32.443 "digest": "sha384", 00:20:32.443 "dhgroup": "ffdhe6144" 00:20:32.443 } 00:20:32.443 } 00:20:32.443 ]' 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.443 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.443 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.443 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.702 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.702 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.702 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.960 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:32.960 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.528 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.528 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.787 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.355 00:20:34.355 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.355 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.355 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.614 { 00:20:34.614 "cntlid": 85, 00:20:34.614 "qid": 0, 00:20:34.614 "state": "enabled", 00:20:34.614 "thread": "nvmf_tgt_poll_group_000", 00:20:34.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:34.614 "listen_address": { 00:20:34.614 "trtype": "TCP", 00:20:34.614 "adrfam": "IPv4", 00:20:34.614 "traddr": "10.0.0.2", 00:20:34.614 "trsvcid": "4420" 00:20:34.614 }, 00:20:34.614 "peer_address": { 00:20:34.614 "trtype": "TCP", 00:20:34.614 "adrfam": "IPv4", 00:20:34.614 "traddr": "10.0.0.1", 00:20:34.614 "trsvcid": "44664" 00:20:34.614 }, 00:20:34.614 "auth": { 00:20:34.614 "state": "completed", 00:20:34.614 "digest": "sha384", 00:20:34.614 "dhgroup": "ffdhe6144" 00:20:34.614 } 00:20:34.614 } 00:20:34.614 ]' 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.614 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.873 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:34.873 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.873 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:35.132 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.700 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.267 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.268 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.527 00:20:36.527 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.527 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.527 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.787 { 00:20:36.787 "cntlid": 87, 00:20:36.787 "qid": 0, 00:20:36.787 "state": "enabled", 00:20:36.787 "thread": "nvmf_tgt_poll_group_000", 00:20:36.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:36.787 "listen_address": { 00:20:36.787 "trtype": 
"TCP", 00:20:36.787 "adrfam": "IPv4", 00:20:36.787 "traddr": "10.0.0.2", 00:20:36.787 "trsvcid": "4420" 00:20:36.787 }, 00:20:36.787 "peer_address": { 00:20:36.787 "trtype": "TCP", 00:20:36.787 "adrfam": "IPv4", 00:20:36.787 "traddr": "10.0.0.1", 00:20:36.787 "trsvcid": "44686" 00:20:36.787 }, 00:20:36.787 "auth": { 00:20:36.787 "state": "completed", 00:20:36.787 "digest": "sha384", 00:20:36.787 "dhgroup": "ffdhe6144" 00:20:36.787 } 00:20:36.787 } 00:20:36.787 ]' 00:20:36.787 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.046 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.305 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:37.305 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.240 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.498 12:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.498 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.066 00:20:39.066 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.066 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.066 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.325 { 00:20:39.325 "cntlid": 89, 00:20:39.325 "qid": 0, 00:20:39.325 "state": "enabled", 00:20:39.325 "thread": "nvmf_tgt_poll_group_000", 00:20:39.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:39.325 "listen_address": { 00:20:39.325 "trtype": "TCP", 00:20:39.325 "adrfam": "IPv4", 00:20:39.325 "traddr": "10.0.0.2", 00:20:39.325 "trsvcid": "4420" 00:20:39.325 }, 00:20:39.325 "peer_address": { 00:20:39.325 "trtype": "TCP", 00:20:39.325 "adrfam": "IPv4", 00:20:39.325 "traddr": "10.0.0.1", 00:20:39.325 "trsvcid": "44718" 00:20:39.325 }, 00:20:39.325 "auth": { 00:20:39.325 "state": "completed", 00:20:39.325 "digest": "sha384", 00:20:39.325 "dhgroup": "ffdhe8192" 00:20:39.325 } 00:20:39.325 } 00:20:39.325 ]' 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.325 12:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.325 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.584 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.584 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.584 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.584 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:39.584 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:40.520 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:40.520 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:40.520 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.520 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.520 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.520 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.520 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.520 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.779 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.780 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.780 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.408 00:20:41.409 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.409 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.409 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.667 { 00:20:41.667 "cntlid": 91, 00:20:41.667 "qid": 0, 00:20:41.667 "state": "enabled", 00:20:41.667 "thread": "nvmf_tgt_poll_group_000", 00:20:41.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:41.667 "listen_address": { 00:20:41.667 "trtype": "TCP", 00:20:41.667 "adrfam": "IPv4", 00:20:41.667 "traddr": "10.0.0.2", 00:20:41.667 "trsvcid": "4420" 00:20:41.667 }, 00:20:41.667 "peer_address": { 00:20:41.667 "trtype": "TCP", 00:20:41.667 "adrfam": "IPv4", 00:20:41.667 "traddr": "10.0.0.1", 00:20:41.667 "trsvcid": "59088" 00:20:41.667 }, 00:20:41.667 "auth": { 00:20:41.667 "state": "completed", 00:20:41.667 "digest": "sha384", 00:20:41.667 "dhgroup": "ffdhe8192" 00:20:41.667 } 00:20:41.667 } 00:20:41.667 ]' 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.667 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.925 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.925 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.925 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:41.925 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.925 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.184 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:42.184 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
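Each pass traced above has the same shape: configure the host with `bdev_nvme_set_options`, register the host NQN with `nvmf_subsystem_add_host` plus a key pair, attach a controller, then read back `nvmf_subsystem_get_qpairs` and assert on the `auth` fields before detaching. The field checks can be sketched standalone as below; the hard-coded values stand in for what the test extracts with `jq -r '.[0].auth.digest'` (and `.dhgroup`, `.state`) from the qpairs JSON, so this is an illustration of the check pattern, not the test script itself:

```shell
#!/usr/bin/env bash
# Standalone sketch of the auth-field verification the log performs
# (target/auth.sh@75-77). Values are hard-coded stand-ins for the jq
# output of `rpc.py nvmf_subsystem_get_qpairs` on a live target.
digest="sha384"
dhgroup="ffdhe8192"
state="completed"

# Same fail-fast comparisons the xtrace shows as [[ sha384 == \s\h\a\3\8\4 ]] etc.
[[ "$digest" == "sha384" ]]     || { echo "unexpected digest: $digest" >&2; exit 1; }
[[ "$dhgroup" == "ffdhe8192" ]] || { echo "unexpected dhgroup: $dhgroup" >&2; exit 1; }
[[ "$state" == "completed" ]]   || { echo "auth did not complete: $state" >&2; exit 1; }
echo "auth verified: ${digest}/${dhgroup}"
```

In the live run the cycle then continues as the log shows: `bdev_nvme_detach_controller nvme0`, a kernel-path handshake via `nvme connect ... --dhchap-secret`, `nvme disconnect`, and `nvmf_subsystem_remove_host` before the next key index.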
00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.119 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.378 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.378 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.378 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.378 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.945 00:20:43.945 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.945 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.945 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.203 { 00:20:44.203 "cntlid": 93, 00:20:44.203 "qid": 0, 00:20:44.203 "state": "enabled", 00:20:44.203 "thread": "nvmf_tgt_poll_group_000", 00:20:44.203 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:44.203 "listen_address": { 00:20:44.203 "trtype": "TCP", 00:20:44.203 "adrfam": "IPv4", 00:20:44.203 "traddr": "10.0.0.2", 00:20:44.203 "trsvcid": "4420" 00:20:44.203 }, 00:20:44.203 "peer_address": { 00:20:44.203 "trtype": "TCP", 00:20:44.203 "adrfam": "IPv4", 00:20:44.203 "traddr": "10.0.0.1", 00:20:44.203 "trsvcid": "59114" 00:20:44.203 }, 00:20:44.203 "auth": { 00:20:44.203 "state": "completed", 00:20:44.203 "digest": "sha384", 00:20:44.203 "dhgroup": "ffdhe8192" 00:20:44.203 } 00:20:44.203 } 00:20:44.203 ]' 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.203 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.770 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:44.771 12:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.338 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.596 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.597 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.856 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.423 00:20:46.423 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:46.423 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.423 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.682 { 00:20:46.682 "cntlid": 95, 00:20:46.682 "qid": 0, 00:20:46.682 "state": "enabled", 00:20:46.682 "thread": "nvmf_tgt_poll_group_000", 00:20:46.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:46.682 "listen_address": { 00:20:46.682 "trtype": "TCP", 00:20:46.682 "adrfam": "IPv4", 00:20:46.682 "traddr": "10.0.0.2", 00:20:46.682 "trsvcid": "4420" 00:20:46.682 }, 00:20:46.682 "peer_address": { 00:20:46.682 "trtype": "TCP", 00:20:46.682 "adrfam": "IPv4", 00:20:46.682 "traddr": "10.0.0.1", 00:20:46.682 "trsvcid": "59150" 00:20:46.682 }, 00:20:46.682 "auth": { 00:20:46.682 "state": "completed", 00:20:46.682 "digest": "sha384", 00:20:46.682 "dhgroup": "ffdhe8192" 00:20:46.682 } 00:20:46.682 } 00:20:46.682 ]' 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.682 12:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.682 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.940 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:46.940 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.144 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.408 00:20:48.408 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.408 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.408 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.666 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.924 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.925 { 00:20:48.925 "cntlid": 97, 00:20:48.925 "qid": 0, 00:20:48.925 "state": "enabled", 00:20:48.925 "thread": "nvmf_tgt_poll_group_000", 00:20:48.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:48.925 "listen_address": { 00:20:48.925 "trtype": "TCP", 00:20:48.925 "adrfam": "IPv4", 00:20:48.925 "traddr": "10.0.0.2", 00:20:48.925 "trsvcid": "4420" 00:20:48.925 }, 00:20:48.925 "peer_address": { 00:20:48.925 "trtype": "TCP", 00:20:48.925 "adrfam": "IPv4", 00:20:48.925 "traddr": "10.0.0.1", 00:20:48.925 "trsvcid": "59178" 00:20:48.925 }, 00:20:48.925 "auth": { 00:20:48.925 "state": "completed", 00:20:48.925 "digest": "sha512", 00:20:48.925 "dhgroup": "null" 00:20:48.925 } 00:20:48.925 } 00:20:48.925 ]' 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.925 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.183 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:49.183 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.119 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.378 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.636 00:20:50.636 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.636 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.636 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.895 { 00:20:50.895 "cntlid": 99, 
00:20:50.895 "qid": 0, 00:20:50.895 "state": "enabled", 00:20:50.895 "thread": "nvmf_tgt_poll_group_000", 00:20:50.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:50.895 "listen_address": { 00:20:50.895 "trtype": "TCP", 00:20:50.895 "adrfam": "IPv4", 00:20:50.895 "traddr": "10.0.0.2", 00:20:50.895 "trsvcid": "4420" 00:20:50.895 }, 00:20:50.895 "peer_address": { 00:20:50.895 "trtype": "TCP", 00:20:50.895 "adrfam": "IPv4", 00:20:50.895 "traddr": "10.0.0.1", 00:20:50.895 "trsvcid": "59196" 00:20:50.895 }, 00:20:50.895 "auth": { 00:20:50.895 "state": "completed", 00:20:50.895 "digest": "sha512", 00:20:50.895 "dhgroup": "null" 00:20:50.895 } 00:20:50.895 } 00:20:50.895 ]' 00:20:50.895 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.152 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.410 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret 
DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:51.410 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:52.344 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.344 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:52.344 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.345 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.345 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.345 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.345 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.345 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
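The trace lines `for digest in "${digests[@]}"`, `for dhgroup in "${dhgroups[@]}"`, and `for keyid in "${!keys[@]}"` reveal the sweep driving these passes: the run above covers sha384/ffdhe8192 and then moves to sha512/null, incrementing the key index each time. A minimal sketch of that loop structure follows; the array contents here are assumptions for illustration only (the real sets live in target/auth.sh), and the body just prints the combination instead of issuing the RPCs:

```shell
#!/usr/bin/env bash
# Sketch (assumed shape) of the nested sweep the xtrace shows: every
# digest x DH group x key index gets its own set_options + add_host +
# attach/verify/detach cycle. Array contents are illustrative.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe8192)
keys=(key0 key1 key2 key3)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for key in "${keys[@]}"; do
      # Real body (per the log): rpc.py bdev_nvme_set_options \
      #   --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup",
      # then the connect_authenticate cycle with "$key".
      echo "would test: $digest $dhgroup $key"
    done
  done
done
```

This explains why the same add_host/attach/get_qpairs/detach sequence repeats verbatim with only the digest, dhgroup, and key number changing between passes.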
00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.603 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.862 00:20:52.862 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.862 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.862 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.120 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.120 { 00:20:53.120 "cntlid": 101, 00:20:53.120 "qid": 0, 00:20:53.120 "state": "enabled", 00:20:53.120 "thread": "nvmf_tgt_poll_group_000", 00:20:53.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:53.121 "listen_address": { 00:20:53.121 "trtype": "TCP", 00:20:53.121 "adrfam": "IPv4", 00:20:53.121 "traddr": "10.0.0.2", 00:20:53.121 "trsvcid": "4420" 00:20:53.121 }, 00:20:53.121 "peer_address": { 00:20:53.121 "trtype": "TCP", 00:20:53.121 "adrfam": "IPv4", 00:20:53.121 "traddr": "10.0.0.1", 00:20:53.121 "trsvcid": "35744" 00:20:53.121 }, 00:20:53.121 "auth": { 00:20:53.121 "state": "completed", 00:20:53.121 "digest": "sha512", 00:20:53.121 "dhgroup": "null" 00:20:53.121 } 00:20:53.121 } 
00:20:53.121 ]' 00:20:53.121 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.121 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.121 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.121 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.121 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.380 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.380 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.380 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.638 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:53.638 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:20:54.205 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.464 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.464 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.723 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.981 00:20:54.981 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.981 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.981 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.240 { 00:20:55.240 "cntlid": 103, 00:20:55.240 "qid": 0, 00:20:55.240 "state": "enabled", 00:20:55.240 "thread": "nvmf_tgt_poll_group_000", 00:20:55.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:55.240 "listen_address": { 00:20:55.240 "trtype": "TCP", 00:20:55.240 "adrfam": "IPv4", 00:20:55.240 "traddr": "10.0.0.2", 00:20:55.240 "trsvcid": "4420" 00:20:55.240 }, 00:20:55.240 "peer_address": { 00:20:55.240 "trtype": "TCP", 00:20:55.240 "adrfam": "IPv4", 00:20:55.240 "traddr": "10.0.0.1", 00:20:55.240 "trsvcid": "35770" 00:20:55.240 }, 00:20:55.240 "auth": { 00:20:55.240 "state": "completed", 00:20:55.240 "digest": "sha512", 00:20:55.240 "dhgroup": "null" 00:20:55.240 } 00:20:55.240 } 00:20:55.240 ]' 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.240 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.499 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.499 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.499 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.499 12:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.499 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.758 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:55.758 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:20:56.693 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.694 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:56.694 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.694 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.694 12:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.694 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.953 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.211 00:20:57.211 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.211 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.211 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.470 { 00:20:57.470 "cntlid": 105, 00:20:57.470 "qid": 0, 00:20:57.470 "state": "enabled", 00:20:57.470 "thread": "nvmf_tgt_poll_group_000", 00:20:57.470 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:57.470 "listen_address": { 00:20:57.470 "trtype": "TCP", 00:20:57.470 "adrfam": "IPv4", 00:20:57.470 "traddr": "10.0.0.2", 00:20:57.470 "trsvcid": "4420" 00:20:57.470 }, 00:20:57.470 "peer_address": { 00:20:57.470 "trtype": "TCP", 00:20:57.470 "adrfam": "IPv4", 00:20:57.470 "traddr": "10.0.0.1", 00:20:57.470 "trsvcid": "35788" 00:20:57.470 }, 00:20:57.470 "auth": { 00:20:57.470 "state": "completed", 00:20:57.470 "digest": "sha512", 00:20:57.470 "dhgroup": "ffdhe2048" 00:20:57.470 } 00:20:57.470 } 00:20:57.470 ]' 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.470 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.470 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.470 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.470 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.470 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.470 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.038 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:58.038 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.603 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.862 12:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.862 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.863 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.863 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.121 00:20:59.121 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.121 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.121 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.380 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.380 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.380 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.380 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.639 { 00:20:59.639 "cntlid": 107, 00:20:59.639 "qid": 0, 00:20:59.639 "state": "enabled", 00:20:59.639 "thread": "nvmf_tgt_poll_group_000", 00:20:59.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:59.639 "listen_address": { 00:20:59.639 "trtype": "TCP", 00:20:59.639 "adrfam": "IPv4", 00:20:59.639 "traddr": "10.0.0.2", 00:20:59.639 "trsvcid": "4420" 00:20:59.639 }, 00:20:59.639 "peer_address": { 00:20:59.639 "trtype": "TCP", 00:20:59.639 "adrfam": "IPv4", 00:20:59.639 "traddr": "10.0.0.1", 00:20:59.639 "trsvcid": "35808" 00:20:59.639 }, 00:20:59.639 "auth": { 00:20:59.639 "state": 
"completed", 00:20:59.639 "digest": "sha512", 00:20:59.639 "dhgroup": "ffdhe2048" 00:20:59.639 } 00:20:59.639 } 00:20:59.639 ]' 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.639 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.898 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:20:59.898 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:00.834 12:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.834 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:00.834 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.835 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.835 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.835 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.835 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.835 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.094 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.353 00:21:01.353 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.353 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.353 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.612 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.612 { 00:21:01.612 "cntlid": 109, 00:21:01.612 "qid": 0, 00:21:01.612 "state": "enabled", 00:21:01.612 "thread": "nvmf_tgt_poll_group_000", 00:21:01.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:01.612 "listen_address": { 00:21:01.612 "trtype": "TCP", 00:21:01.612 "adrfam": "IPv4", 00:21:01.612 "traddr": "10.0.0.2", 00:21:01.612 "trsvcid": "4420" 00:21:01.612 }, 00:21:01.612 "peer_address": { 00:21:01.612 "trtype": "TCP", 00:21:01.612 "adrfam": "IPv4", 00:21:01.612 "traddr": "10.0.0.1", 00:21:01.612 "trsvcid": "40912" 00:21:01.612 }, 00:21:01.612 "auth": { 00:21:01.612 "state": "completed", 00:21:01.612 "digest": "sha512", 00:21:01.612 "dhgroup": "ffdhe2048" 00:21:01.612 } 00:21:01.612 } 00:21:01.612 ]' 00:21:01.612 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.871 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.871 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.130 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:02.130 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 
12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.066 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.067 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:03.067 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.067 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.325 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.326 12:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.326 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.326 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.584 00:21:03.584 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.584 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.584 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.842 { 00:21:03.842 "cntlid": 111, 
00:21:03.842 "qid": 0, 00:21:03.842 "state": "enabled", 00:21:03.842 "thread": "nvmf_tgt_poll_group_000", 00:21:03.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:03.842 "listen_address": { 00:21:03.842 "trtype": "TCP", 00:21:03.842 "adrfam": "IPv4", 00:21:03.842 "traddr": "10.0.0.2", 00:21:03.842 "trsvcid": "4420" 00:21:03.842 }, 00:21:03.842 "peer_address": { 00:21:03.842 "trtype": "TCP", 00:21:03.842 "adrfam": "IPv4", 00:21:03.842 "traddr": "10.0.0.1", 00:21:03.842 "trsvcid": "40932" 00:21:03.842 }, 00:21:03.842 "auth": { 00:21:03.842 "state": "completed", 00:21:03.842 "digest": "sha512", 00:21:03.842 "dhgroup": "ffdhe2048" 00:21:03.842 } 00:21:03.842 } 00:21:03.842 ]' 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.842 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.100 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:04.100 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.036 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.295 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.295 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.569 00:21:05.569 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.569 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.569 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.842 { 00:21:05.842 "cntlid": 113, 00:21:05.842 "qid": 0, 00:21:05.842 "state": "enabled", 00:21:05.842 "thread": "nvmf_tgt_poll_group_000", 00:21:05.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:05.842 "listen_address": { 00:21:05.842 "trtype": "TCP", 00:21:05.842 "adrfam": "IPv4", 00:21:05.842 "traddr": "10.0.0.2", 00:21:05.842 "trsvcid": "4420" 00:21:05.842 }, 00:21:05.842 "peer_address": { 00:21:05.842 "trtype": "TCP", 00:21:05.842 "adrfam": "IPv4", 00:21:05.842 "traddr": "10.0.0.1", 00:21:05.842 "trsvcid": "40958" 00:21:05.842 }, 00:21:05.842 "auth": { 00:21:05.842 "state": 
"completed", 00:21:05.842 "digest": "sha512", 00:21:05.842 "dhgroup": "ffdhe3072" 00:21:05.842 } 00:21:05.842 } 00:21:05.842 ]' 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.842 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.153 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.153 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.153 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.153 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.153 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.444 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:06.444 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.019 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.278 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.536 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.536 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.536 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.536 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.795 00:21:07.795 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.795 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.795 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.054 { 00:21:08.054 "cntlid": 115, 00:21:08.054 "qid": 0, 00:21:08.054 "state": "enabled", 00:21:08.054 "thread": "nvmf_tgt_poll_group_000", 00:21:08.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:08.054 "listen_address": { 00:21:08.054 "trtype": "TCP", 00:21:08.054 "adrfam": "IPv4", 00:21:08.054 "traddr": "10.0.0.2", 00:21:08.054 "trsvcid": "4420" 00:21:08.054 }, 00:21:08.054 "peer_address": { 00:21:08.054 "trtype": "TCP", 00:21:08.054 "adrfam": "IPv4", 00:21:08.054 "traddr": "10.0.0.1", 00:21:08.054 "trsvcid": "40974" 00:21:08.054 }, 00:21:08.054 "auth": { 00:21:08.054 "state": "completed", 00:21:08.054 "digest": "sha512", 00:21:08.054 "dhgroup": "ffdhe3072" 00:21:08.054 } 00:21:08.054 } 00:21:08.054 ]' 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.054 12:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.054 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.312 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.312 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.312 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.570 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:08.570 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.137 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.396 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.654 00:21:09.913 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.913 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.913 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.172 12:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.172 { 00:21:10.172 "cntlid": 117, 00:21:10.172 "qid": 0, 00:21:10.172 "state": "enabled", 00:21:10.172 "thread": "nvmf_tgt_poll_group_000", 00:21:10.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:10.172 "listen_address": { 00:21:10.172 "trtype": "TCP", 00:21:10.172 "adrfam": "IPv4", 00:21:10.172 "traddr": "10.0.0.2", 00:21:10.172 "trsvcid": "4420" 00:21:10.172 }, 00:21:10.172 "peer_address": { 00:21:10.172 "trtype": "TCP", 00:21:10.172 "adrfam": "IPv4", 00:21:10.172 "traddr": "10.0.0.1", 00:21:10.172 "trsvcid": "40992" 00:21:10.172 }, 00:21:10.172 "auth": { 00:21:10.172 "state": "completed", 00:21:10.172 "digest": "sha512", 00:21:10.172 "dhgroup": "ffdhe3072" 00:21:10.172 } 00:21:10.172 } 00:21:10.172 ]' 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.172 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.431 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:10.431 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.625 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.884 00:21:11.884 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.884 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.884 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.142 { 00:21:12.142 "cntlid": 119, 00:21:12.142 "qid": 0, 00:21:12.142 "state": "enabled", 00:21:12.142 "thread": "nvmf_tgt_poll_group_000", 00:21:12.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:12.142 "listen_address": { 00:21:12.142 "trtype": "TCP", 00:21:12.142 "adrfam": "IPv4", 00:21:12.142 "traddr": "10.0.0.2", 00:21:12.142 "trsvcid": "4420" 00:21:12.142 }, 00:21:12.142 "peer_address": { 00:21:12.142 "trtype": "TCP", 00:21:12.142 "adrfam": "IPv4", 00:21:12.142 "traddr": "10.0.0.1", 
00:21:12.142 "trsvcid": "38816" 00:21:12.142 }, 00:21:12.142 "auth": { 00:21:12.142 "state": "completed", 00:21:12.142 "digest": "sha512", 00:21:12.142 "dhgroup": "ffdhe3072" 00:21:12.142 } 00:21:12.142 } 00:21:12.142 ]' 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.142 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.401 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.401 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.401 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.660 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:12.660 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.596 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.596 12:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.596 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.164 00:21:14.164 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.164 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.164 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.423 { 00:21:14.423 "cntlid": 121, 00:21:14.423 "qid": 0, 00:21:14.423 "state": "enabled", 00:21:14.423 "thread": "nvmf_tgt_poll_group_000", 00:21:14.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:14.423 "listen_address": { 00:21:14.423 "trtype": "TCP", 00:21:14.423 "adrfam": "IPv4", 00:21:14.423 "traddr": "10.0.0.2", 00:21:14.423 "trsvcid": "4420" 00:21:14.423 }, 00:21:14.423 "peer_address": { 00:21:14.423 "trtype": "TCP", 00:21:14.423 "adrfam": "IPv4", 00:21:14.423 "traddr": "10.0.0.1", 00:21:14.423 "trsvcid": "38834" 00:21:14.423 }, 00:21:14.423 "auth": { 00:21:14.423 "state": "completed", 00:21:14.423 "digest": "sha512", 00:21:14.423 "dhgroup": "ffdhe4096" 00:21:14.423 } 00:21:14.423 } 00:21:14.423 ]' 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.423 12:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.423 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.682 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:14.682 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:15.618 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.618 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.877 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.877 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.444 00:21:16.444 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.444 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.444 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.444 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.444 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.703 { 00:21:16.703 "cntlid": 123, 00:21:16.703 "qid": 0, 00:21:16.703 "state": "enabled", 00:21:16.703 "thread": "nvmf_tgt_poll_group_000", 00:21:16.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:16.703 "listen_address": { 00:21:16.703 "trtype": "TCP", 00:21:16.703 "adrfam": "IPv4", 00:21:16.703 "traddr": "10.0.0.2", 00:21:16.703 "trsvcid": "4420" 00:21:16.703 }, 00:21:16.703 "peer_address": { 00:21:16.703 "trtype": "TCP", 00:21:16.703 "adrfam": "IPv4", 00:21:16.703 "traddr": "10.0.0.1", 00:21:16.703 "trsvcid": "38870" 00:21:16.703 }, 00:21:16.703 "auth": { 00:21:16.703 "state": "completed", 00:21:16.703 "digest": "sha512", 00:21:16.703 "dhgroup": "ffdhe4096" 00:21:16.703 } 00:21:16.703 } 00:21:16.703 ]' 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.703 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.961 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:16.961 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.897 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.897 12:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.156 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.415 00:21:18.415 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.415 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.415 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.674 { 00:21:18.674 "cntlid": 125, 00:21:18.674 "qid": 0, 00:21:18.674 "state": "enabled", 00:21:18.674 "thread": "nvmf_tgt_poll_group_000", 00:21:18.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:18.674 "listen_address": { 00:21:18.674 "trtype": "TCP", 00:21:18.674 "adrfam": "IPv4", 00:21:18.674 "traddr": "10.0.0.2", 00:21:18.674 
"trsvcid": "4420" 00:21:18.674 }, 00:21:18.674 "peer_address": { 00:21:18.674 "trtype": "TCP", 00:21:18.674 "adrfam": "IPv4", 00:21:18.674 "traddr": "10.0.0.1", 00:21:18.674 "trsvcid": "38898" 00:21:18.674 }, 00:21:18.674 "auth": { 00:21:18.674 "state": "completed", 00:21:18.674 "digest": "sha512", 00:21:18.674 "dhgroup": "ffdhe4096" 00:21:18.674 } 00:21:18.674 } 00:21:18.674 ]' 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.674 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.933 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.933 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.933 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.933 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.933 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.192 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:19.192 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.128 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.387 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.646 00:21:20.646 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.646 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.646 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.905 { 00:21:20.905 "cntlid": 127, 00:21:20.905 "qid": 0, 00:21:20.905 "state": "enabled", 00:21:20.905 "thread": "nvmf_tgt_poll_group_000", 00:21:20.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:20.905 "listen_address": { 00:21:20.905 "trtype": "TCP", 00:21:20.905 "adrfam": "IPv4", 00:21:20.905 "traddr": "10.0.0.2", 00:21:20.905 "trsvcid": "4420" 00:21:20.905 }, 00:21:20.905 "peer_address": { 00:21:20.905 "trtype": "TCP", 00:21:20.905 "adrfam": "IPv4", 00:21:20.905 "traddr": "10.0.0.1", 00:21:20.905 "trsvcid": "38910" 00:21:20.905 }, 00:21:20.905 "auth": { 00:21:20.905 "state": "completed", 00:21:20.905 "digest": "sha512", 00:21:20.905 "dhgroup": "ffdhe4096" 00:21:20.905 } 00:21:20.905 } 00:21:20.905 ]' 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.905 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.905 12:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.163 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.163 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.163 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.163 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.421 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:21.421 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.357 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.924 00:21:22.924 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.924 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.924 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.183 12:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.183 { 00:21:23.183 "cntlid": 129, 00:21:23.183 "qid": 0, 00:21:23.183 "state": "enabled", 00:21:23.183 "thread": "nvmf_tgt_poll_group_000", 00:21:23.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:23.183 "listen_address": { 00:21:23.183 "trtype": "TCP", 00:21:23.183 "adrfam": "IPv4", 00:21:23.183 "traddr": "10.0.0.2", 00:21:23.183 "trsvcid": "4420" 00:21:23.183 }, 00:21:23.183 "peer_address": { 00:21:23.183 "trtype": "TCP", 00:21:23.183 "adrfam": "IPv4", 00:21:23.183 "traddr": "10.0.0.1", 00:21:23.183 "trsvcid": "57526" 00:21:23.183 }, 00:21:23.183 "auth": { 00:21:23.183 "state": "completed", 00:21:23.183 "digest": "sha512", 00:21:23.183 "dhgroup": "ffdhe6144" 00:21:23.183 } 00:21:23.183 } 00:21:23.183 ]' 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.183 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.442 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.442 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.442 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.442 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.442 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.700 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:23.700 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.397 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.397 12:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.655 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.656 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.914 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.914 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.915 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.174 00:21:25.174 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.174 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.174 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.432 { 00:21:25.432 "cntlid": 131, 00:21:25.432 "qid": 0, 00:21:25.432 "state": "enabled", 00:21:25.432 "thread": "nvmf_tgt_poll_group_000", 00:21:25.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:25.432 "listen_address": { 00:21:25.432 "trtype": "TCP", 00:21:25.432 "adrfam": "IPv4", 00:21:25.432 "traddr": "10.0.0.2", 00:21:25.432 
"trsvcid": "4420" 00:21:25.432 }, 00:21:25.432 "peer_address": { 00:21:25.432 "trtype": "TCP", 00:21:25.432 "adrfam": "IPv4", 00:21:25.432 "traddr": "10.0.0.1", 00:21:25.432 "trsvcid": "57552" 00:21:25.432 }, 00:21:25.432 "auth": { 00:21:25.432 "state": "completed", 00:21:25.432 "digest": "sha512", 00:21:25.432 "dhgroup": "ffdhe6144" 00:21:25.432 } 00:21:25.432 } 00:21:25.432 ]' 00:21:25.432 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.690 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.691 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.949 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:25.949 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.884 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.143 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.401 00:21:27.660 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.660 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:27.660 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.918 { 00:21:27.918 "cntlid": 133, 00:21:27.918 "qid": 0, 00:21:27.918 "state": "enabled", 00:21:27.918 "thread": "nvmf_tgt_poll_group_000", 00:21:27.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:27.918 "listen_address": { 00:21:27.918 "trtype": "TCP", 00:21:27.918 "adrfam": "IPv4", 00:21:27.918 "traddr": "10.0.0.2", 00:21:27.918 "trsvcid": "4420" 00:21:27.918 }, 00:21:27.918 "peer_address": { 00:21:27.918 "trtype": "TCP", 00:21:27.918 "adrfam": "IPv4", 00:21:27.918 "traddr": "10.0.0.1", 00:21:27.918 "trsvcid": "57574" 00:21:27.918 }, 00:21:27.918 "auth": { 00:21:27.918 "state": "completed", 00:21:27.918 "digest": "sha512", 00:21:27.918 "dhgroup": "ffdhe6144" 00:21:27.918 } 00:21:27.918 } 00:21:27.918 ]' 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.918 12:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.918 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.177 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:28.177 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:29.111 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.112 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.370 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.629 00:21:29.888 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.888 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.888 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.146 { 00:21:30.146 "cntlid": 135, 00:21:30.146 "qid": 0, 00:21:30.146 "state": "enabled", 00:21:30.146 "thread": "nvmf_tgt_poll_group_000", 00:21:30.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:30.146 "listen_address": { 00:21:30.146 "trtype": "TCP", 00:21:30.146 "adrfam": "IPv4", 00:21:30.146 "traddr": "10.0.0.2", 00:21:30.146 "trsvcid": "4420" 00:21:30.146 }, 00:21:30.146 "peer_address": { 00:21:30.146 "trtype": "TCP", 00:21:30.146 "adrfam": "IPv4", 00:21:30.146 "traddr": "10.0.0.1", 00:21:30.146 "trsvcid": "57608" 00:21:30.146 }, 00:21:30.146 "auth": { 00:21:30.146 "state": "completed", 00:21:30.146 "digest": "sha512", 00:21:30.146 "dhgroup": "ffdhe6144" 00:21:30.146 } 00:21:30.146 } 00:21:30.146 ]' 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.146 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.405 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:30.405 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.341 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.341 12:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.600 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.600 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.167 00:21:32.167 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.167 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.167 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.426 { 00:21:32.426 "cntlid": 137, 00:21:32.426 "qid": 0, 00:21:32.426 "state": "enabled", 00:21:32.426 "thread": "nvmf_tgt_poll_group_000", 00:21:32.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:32.426 "listen_address": { 00:21:32.426 "trtype": "TCP", 00:21:32.426 "adrfam": "IPv4", 00:21:32.426 "traddr": "10.0.0.2", 00:21:32.426 
"trsvcid": "4420" 00:21:32.426 }, 00:21:32.426 "peer_address": { 00:21:32.426 "trtype": "TCP", 00:21:32.426 "adrfam": "IPv4", 00:21:32.426 "traddr": "10.0.0.1", 00:21:32.426 "trsvcid": "41104" 00:21:32.426 }, 00:21:32.426 "auth": { 00:21:32.426 "state": "completed", 00:21:32.426 "digest": "sha512", 00:21:32.426 "dhgroup": "ffdhe8192" 00:21:32.426 } 00:21:32.426 } 00:21:32.426 ]' 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.426 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.426 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.426 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.685 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.685 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.685 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.944 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:32.944 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:33.880 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.880 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.881 12:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.881 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.817 00:21:34.817 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.817 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.818 { 00:21:34.818 "cntlid": 139, 00:21:34.818 "qid": 0, 00:21:34.818 "state": "enabled", 00:21:34.818 "thread": "nvmf_tgt_poll_group_000", 00:21:34.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:34.818 "listen_address": { 00:21:34.818 "trtype": "TCP", 00:21:34.818 "adrfam": "IPv4", 00:21:34.818 "traddr": "10.0.0.2", 00:21:34.818 "trsvcid": "4420" 00:21:34.818 }, 00:21:34.818 "peer_address": { 00:21:34.818 "trtype": "TCP", 00:21:34.818 "adrfam": "IPv4", 00:21:34.818 "traddr": "10.0.0.1", 00:21:34.818 "trsvcid": "41114" 00:21:34.818 }, 00:21:34.818 "auth": { 00:21:34.818 "state": "completed", 00:21:34.818 "digest": "sha512", 00:21:34.818 "dhgroup": "ffdhe8192" 00:21:34.818 } 00:21:34.818 } 00:21:34.818 ]' 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.818 12:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.818 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.076 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.076 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.076 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.076 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.076 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.336 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:35.336 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: --dhchap-ctrl-secret DHHC-1:02:MmE0YmM5YTFmZDU3NGJhNjIyNjhkNDJhZDViYTA1YzYxNmI0N2JiNWFlZDVjY2I4rY8a4w==: 00:21:36.272 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.272 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:36.272 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.272 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.273 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.273 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.273 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.273 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.531 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.098 00:21:37.098 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.098 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.098 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.356 12:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.356 { 00:21:37.356 "cntlid": 141, 00:21:37.356 "qid": 0, 00:21:37.356 "state": "enabled", 00:21:37.356 "thread": "nvmf_tgt_poll_group_000", 00:21:37.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:37.356 "listen_address": { 00:21:37.356 "trtype": "TCP", 00:21:37.356 "adrfam": "IPv4", 00:21:37.356 "traddr": "10.0.0.2", 00:21:37.356 "trsvcid": "4420" 00:21:37.356 }, 00:21:37.356 "peer_address": { 00:21:37.356 "trtype": "TCP", 00:21:37.356 "adrfam": "IPv4", 00:21:37.356 "traddr": "10.0.0.1", 00:21:37.356 "trsvcid": "41142" 00:21:37.356 }, 00:21:37.356 "auth": { 00:21:37.356 "state": "completed", 00:21:37.356 "digest": "sha512", 00:21:37.356 "dhgroup": "ffdhe8192" 00:21:37.356 } 00:21:37.356 } 00:21:37.356 ]' 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.356 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.614 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.614 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.614 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.872 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:37.872 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:01:NzRkMmIxNDZiYTNlNDQyNmZjMWYzMWJhNGRjNjhjMjSNU6ah: 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.441 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.700 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.267 00:21:39.267 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.267 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.267 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.785 { 00:21:39.785 "cntlid": 143, 00:21:39.785 "qid": 0, 00:21:39.785 "state": "enabled", 00:21:39.785 "thread": "nvmf_tgt_poll_group_000", 00:21:39.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:39.785 "listen_address": { 00:21:39.785 "trtype": "TCP", 00:21:39.785 "adrfam": 
"IPv4", 00:21:39.785 "traddr": "10.0.0.2", 00:21:39.785 "trsvcid": "4420" 00:21:39.785 }, 00:21:39.785 "peer_address": { 00:21:39.785 "trtype": "TCP", 00:21:39.785 "adrfam": "IPv4", 00:21:39.785 "traddr": "10.0.0.1", 00:21:39.785 "trsvcid": "41168" 00:21:39.785 }, 00:21:39.785 "auth": { 00:21:39.785 "state": "completed", 00:21:39.785 "digest": "sha512", 00:21:39.785 "dhgroup": "ffdhe8192" 00:21:39.785 } 00:21:39.785 } 00:21:39.785 ]' 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.785 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.044 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:40.044 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.979 12:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.979 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.919 00:21:41.919 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.920 { 00:21:41.920 "cntlid": 145, 00:21:41.920 "qid": 0, 00:21:41.920 "state": "enabled", 00:21:41.920 "thread": "nvmf_tgt_poll_group_000", 00:21:41.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:41.920 "listen_address": { 00:21:41.920 "trtype": "TCP", 00:21:41.920 "adrfam": "IPv4", 00:21:41.920 "traddr": "10.0.0.2", 00:21:41.920 "trsvcid": "4420" 00:21:41.920 }, 00:21:41.920 "peer_address": { 00:21:41.920 "trtype": "TCP", 00:21:41.920 "adrfam": "IPv4", 00:21:41.920 "traddr": "10.0.0.1", 00:21:41.920 "trsvcid": "55404" 00:21:41.920 }, 00:21:41.920 "auth": { 00:21:41.920 "state": 
"completed", 00:21:41.920 "digest": "sha512", 00:21:41.920 "dhgroup": "ffdhe8192" 00:21:41.920 } 00:21:41.920 } 00:21:41.920 ]' 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.920 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.178 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.178 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.178 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.178 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.178 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.436 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:42.436 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTYyODhhMDVmNmZkYzE5YWRkMzQ3MmM2N2EzNTUzMzM4M2ExN2MwNGMwZjgwNTUy2kySRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NWUwMzY1OTM4OTI5YzdhNzM2ZmZjMmFkNGNiODQxODNkNDdiYjMxNmIzNDdlNjQwYzAzOWU1Y2FiZjkxYjJhNTtoCWg=: 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:43.372 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:43.940 request: 00:21:43.940 { 00:21:43.940 "name": "nvme0", 00:21:43.940 "trtype": "tcp", 00:21:43.940 "traddr": "10.0.0.2", 00:21:43.940 "adrfam": "ipv4", 00:21:43.940 "trsvcid": "4420", 00:21:43.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:43.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:43.940 "prchk_reftag": false, 00:21:43.940 "prchk_guard": false, 00:21:43.940 "hdgst": false, 00:21:43.940 "ddgst": false, 00:21:43.940 "dhchap_key": "key2", 00:21:43.940 "allow_unrecognized_csi": false, 00:21:43.940 "method": "bdev_nvme_attach_controller", 00:21:43.940 "req_id": 1 00:21:43.940 } 00:21:43.940 Got JSON-RPC error response 00:21:43.940 response: 00:21:43.940 { 00:21:43.940 "code": -5, 00:21:43.940 "message": 
"Input/output error" 00:21:43.940 } 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:43.940 12:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:43.940 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.508 request: 00:21:44.508 { 00:21:44.508 "name": "nvme0", 00:21:44.508 "trtype": "tcp", 00:21:44.508 "traddr": "10.0.0.2", 00:21:44.508 "adrfam": "ipv4", 00:21:44.508 "trsvcid": "4420", 00:21:44.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:44.508 "prchk_reftag": false, 00:21:44.508 "prchk_guard": false, 00:21:44.508 "hdgst": 
false, 00:21:44.508 "ddgst": false, 00:21:44.508 "dhchap_key": "key1", 00:21:44.508 "dhchap_ctrlr_key": "ckey2", 00:21:44.508 "allow_unrecognized_csi": false, 00:21:44.508 "method": "bdev_nvme_attach_controller", 00:21:44.508 "req_id": 1 00:21:44.508 } 00:21:44.508 Got JSON-RPC error response 00:21:44.508 response: 00:21:44.508 { 00:21:44.508 "code": -5, 00:21:44.508 "message": "Input/output error" 00:21:44.508 } 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.508 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.076 request: 00:21:45.076 { 00:21:45.076 "name": "nvme0", 00:21:45.076 "trtype": 
"tcp", 00:21:45.076 "traddr": "10.0.0.2", 00:21:45.076 "adrfam": "ipv4", 00:21:45.076 "trsvcid": "4420", 00:21:45.076 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:45.076 "prchk_reftag": false, 00:21:45.076 "prchk_guard": false, 00:21:45.076 "hdgst": false, 00:21:45.076 "ddgst": false, 00:21:45.076 "dhchap_key": "key1", 00:21:45.076 "dhchap_ctrlr_key": "ckey1", 00:21:45.076 "allow_unrecognized_csi": false, 00:21:45.076 "method": "bdev_nvme_attach_controller", 00:21:45.076 "req_id": 1 00:21:45.076 } 00:21:45.076 Got JSON-RPC error response 00:21:45.076 response: 00:21:45.076 { 00:21:45.076 "code": -5, 00:21:45.076 "message": "Input/output error" 00:21:45.076 } 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 157742 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 157742 ']' 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 157742 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 157742 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 157742' 00:21:45.076 killing process with pid 157742 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 157742 00:21:45.076 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 157742 00:21:45.335 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:45.335 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.335 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.335 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=188615 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 188615 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 188615 ']' 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:45.336 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 188615 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 188615 ']' 00:21:45.594 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.595 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:45.595 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.595 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:45.595 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:45.853 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:45.853 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:45.853 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.853 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.112 null0 00:21:46.112 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Oxc 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.kIv ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIv 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VhD 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.pY1 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pY1 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iFo 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.K2c ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K2c 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ka4 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.113 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.049 nvme0n1 00:21:47.049 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.049 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.049 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.308 { 00:21:47.308 "cntlid": 1, 00:21:47.308 "qid": 0, 00:21:47.308 "state": "enabled", 00:21:47.308 "thread": "nvmf_tgt_poll_group_000", 00:21:47.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:47.308 "listen_address": { 00:21:47.308 "trtype": "TCP", 00:21:47.308 "adrfam": "IPv4", 00:21:47.308 "traddr": "10.0.0.2", 00:21:47.308 "trsvcid": "4420" 00:21:47.308 }, 00:21:47.308 "peer_address": { 00:21:47.308 "trtype": "TCP", 00:21:47.308 "adrfam": "IPv4", 00:21:47.308 "traddr": 
"10.0.0.1", 00:21:47.308 "trsvcid": "55452" 00:21:47.308 }, 00:21:47.308 "auth": { 00:21:47.308 "state": "completed", 00:21:47.308 "digest": "sha512", 00:21:47.308 "dhgroup": "ffdhe8192" 00:21:47.308 } 00:21:47.308 } 00:21:47.308 ]' 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.308 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.567 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.567 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.567 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.825 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:47.825 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:48.761 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:48.761 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.761 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.019 request: 00:21:49.019 { 00:21:49.019 "name": "nvme0", 00:21:49.019 "trtype": "tcp", 00:21:49.019 "traddr": "10.0.0.2", 00:21:49.019 "adrfam": "ipv4", 00:21:49.019 "trsvcid": "4420", 00:21:49.019 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:49.019 "prchk_reftag": false, 00:21:49.019 "prchk_guard": false, 00:21:49.020 "hdgst": false, 00:21:49.020 "ddgst": false, 00:21:49.020 "dhchap_key": "key3", 00:21:49.020 
"allow_unrecognized_csi": false, 00:21:49.020 "method": "bdev_nvme_attach_controller", 00:21:49.020 "req_id": 1 00:21:49.020 } 00:21:49.020 Got JSON-RPC error response 00:21:49.020 response: 00:21:49.020 { 00:21:49.020 "code": -5, 00:21:49.020 "message": "Input/output error" 00:21:49.020 } 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:49.279 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:49.538 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.538 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.797 request: 00:21:49.797 { 00:21:49.797 "name": "nvme0", 00:21:49.797 "trtype": "tcp", 00:21:49.797 "traddr": "10.0.0.2", 00:21:49.797 "adrfam": "ipv4", 00:21:49.797 "trsvcid": "4420", 00:21:49.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:49.797 "prchk_reftag": false, 00:21:49.797 "prchk_guard": false, 00:21:49.797 "hdgst": false, 00:21:49.797 "ddgst": false, 00:21:49.797 "dhchap_key": "key3", 00:21:49.797 "allow_unrecognized_csi": false, 00:21:49.797 "method": "bdev_nvme_attach_controller", 00:21:49.797 "req_id": 1 00:21:49.797 } 00:21:49.797 Got JSON-RPC error response 00:21:49.797 response: 00:21:49.797 { 00:21:49.797 "code": -5, 00:21:49.797 "message": "Input/output error" 00:21:49.797 } 00:21:49.797 
12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:49.797 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.056 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.057 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.057 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.315 request: 00:21:50.315 { 00:21:50.315 "name": "nvme0", 00:21:50.315 "trtype": "tcp", 00:21:50.315 "traddr": "10.0.0.2", 00:21:50.315 "adrfam": "ipv4", 00:21:50.315 "trsvcid": "4420", 00:21:50.315 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:50.315 "prchk_reftag": false, 00:21:50.315 "prchk_guard": false, 00:21:50.315 "hdgst": false, 00:21:50.315 "ddgst": false, 00:21:50.315 "dhchap_key": "key0", 00:21:50.315 "dhchap_ctrlr_key": "key1", 00:21:50.315 "allow_unrecognized_csi": false, 00:21:50.315 "method": "bdev_nvme_attach_controller", 00:21:50.315 "req_id": 1 00:21:50.315 } 00:21:50.315 Got JSON-RPC error response 00:21:50.315 response: 00:21:50.315 { 00:21:50.315 "code": -5, 00:21:50.315 "message": "Input/output error" 00:21:50.315 } 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:50.315 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:50.883 nvme0n1 00:21:50.883 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:50.883 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:50.883 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.142 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.142 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.142 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:51.401 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:52.337 nvme0n1 00:21:52.337 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:52.337 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:52.337 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.596 
12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.596 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:52.855 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.855 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:52.855 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: --dhchap-ctrl-secret DHHC-1:03:NDBiYWJmNjQwMmI4ZjQwN2VmYzBkYzE3NDUxMGFmN2YzZDkyYjZhOWUyYWFmZjg5MDY4YWM4MGQ3NzQ3Njg4NGtVKPc=: 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.423 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:53.681 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:54.248 request: 00:21:54.248 { 00:21:54.248 "name": "nvme0", 00:21:54.248 "trtype": "tcp", 00:21:54.248 "traddr": "10.0.0.2", 00:21:54.248 "adrfam": "ipv4", 00:21:54.248 "trsvcid": "4420", 00:21:54.248 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:54.248 "prchk_reftag": false, 00:21:54.248 "prchk_guard": false, 00:21:54.248 "hdgst": false, 00:21:54.248 "ddgst": false, 00:21:54.248 "dhchap_key": "key1", 00:21:54.248 "allow_unrecognized_csi": false, 00:21:54.248 "method": "bdev_nvme_attach_controller", 00:21:54.248 "req_id": 1 00:21:54.248 } 00:21:54.248 Got JSON-RPC error response 00:21:54.248 response: 00:21:54.248 { 00:21:54.248 "code": -5, 00:21:54.248 "message": "Input/output error" 00:21:54.248 } 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.248 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.184 nvme0n1 00:21:55.184 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:55.184 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:55.184 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.443 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.443 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.443 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:55.701 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:55.960 nvme0n1 00:21:55.960 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:55.960 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:55.960 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.218 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.218 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.218 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: '' 2s 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:56.477 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: ]] 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjNlNDAzZDE0ZDYzNzAzMjQ1YzI5ZTY5Nzc1YzgwNzW8Vwd6: 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:56.478 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:59.010 
12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: 2s 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:59.010 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: ]] 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmRkOTQyNDA4ZDM3YzMwODZhYjdhNzUzNDA2ZTVlY2U2MmU3OWQ4ZGE0OTE5YmNlxmkk4g==: 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:59.010 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:00.913 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.914 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:01.481 nvme0n1 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:01.740 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.307 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:02.307 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:02.307 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.565 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:02.565 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:02.824 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:02.824 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:02.824 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.083 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.651 request: 00:22:03.651 { 00:22:03.651 "name": "nvme0", 00:22:03.651 "dhchap_key": "key1", 00:22:03.651 "dhchap_ctrlr_key": "key3", 00:22:03.651 "method": "bdev_nvme_set_keys", 00:22:03.651 "req_id": 1 00:22:03.651 } 00:22:03.651 Got JSON-RPC error response 00:22:03.651 response: 00:22:03.651 { 00:22:03.651 "code": -13, 00:22:03.651 "message": "Permission denied" 00:22:03.651 } 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:03.651 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:03.651 12:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.909 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:03.909 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:05.285 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:06.228 nvme0n1 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.228 12:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.228 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.910 request: 00:22:06.910 { 00:22:06.910 "name": "nvme0", 00:22:06.910 "dhchap_key": "key2", 00:22:06.910 "dhchap_ctrlr_key": "key0", 00:22:06.910 "method": "bdev_nvme_set_keys", 00:22:06.910 "req_id": 1 00:22:06.910 } 00:22:06.910 Got JSON-RPC error response 00:22:06.910 response: 00:22:06.910 { 00:22:06.910 "code": -13, 00:22:06.910 "message": "Permission denied" 00:22:06.910 } 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:06.910 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.253 12:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:07.253 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:08.192 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:08.192 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:08.193 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 157791 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 157791 ']' 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 157791 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 157791 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 157791' 00:22:08.453 killing process with pid 157791 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 157791 00:22:08.453 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 157791 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.712 rmmod nvme_tcp 00:22:08.712 rmmod nvme_fabrics 00:22:08.712 rmmod nvme_keyring 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 188615 ']' 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 188615 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 188615 ']' 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 188615 00:22:08.712 12:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:08.712 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 188615 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 188615' 00:22:08.971 killing process with pid 188615 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 188615 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 188615 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.971 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.508 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.508 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Oxc /tmp/spdk.key-sha256.VhD /tmp/spdk.key-sha384.iFo /tmp/spdk.key-sha512.ka4 /tmp/spdk.key-sha512.kIv /tmp/spdk.key-sha384.pY1 /tmp/spdk.key-sha256.K2c '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:11.508 00:22:11.508 real 3m16.017s 00:22:11.508 user 7m42.558s 00:22:11.508 sys 0m27.269s 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 ************************************ 00:22:11.509 END TEST nvmf_auth_target 00:22:11.509 ************************************ 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:11.509 12:28:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 ************************************ 00:22:11.509 START TEST nvmf_bdevio_no_huge 00:22:11.509 ************************************ 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:11.509 * Looking for test storage... 00:22:11.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.509 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.509 --rc genhtml_branch_coverage=1 00:22:11.509 --rc genhtml_function_coverage=1 00:22:11.509 --rc genhtml_legend=1 00:22:11.509 --rc geninfo_all_blocks=1 00:22:11.509 --rc geninfo_unexecuted_blocks=1 00:22:11.509 00:22:11.509 ' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.509 --rc genhtml_branch_coverage=1 00:22:11.509 --rc genhtml_function_coverage=1 00:22:11.509 --rc genhtml_legend=1 00:22:11.509 --rc geninfo_all_blocks=1 00:22:11.509 --rc geninfo_unexecuted_blocks=1 00:22:11.509 00:22:11.509 ' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.509 --rc genhtml_branch_coverage=1 00:22:11.509 --rc genhtml_function_coverage=1 00:22:11.509 --rc genhtml_legend=1 00:22:11.509 --rc geninfo_all_blocks=1 00:22:11.509 --rc geninfo_unexecuted_blocks=1 00:22:11.509 00:22:11.509 ' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:11.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.509 --rc genhtml_branch_coverage=1 00:22:11.509 --rc 
genhtml_function_coverage=1 00:22:11.509 --rc genhtml_legend=1 00:22:11.509 --rc geninfo_all_blocks=1 00:22:11.509 --rc geninfo_unexecuted_blocks=1 00:22:11.509 00:22:11.509 ' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:11.509 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.509 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.510 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.784 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.784 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.784 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.784 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:16.785 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:16.785 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:16.785 Found net devices under 0000:af:00.0: cvl_0_0 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.785 
12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:16.785 Found net devices under 0000:af:00.1: cvl_0_1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:16.785 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:17.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:22:17.045 00:22:17.045 --- 10.0.0.2 ping statistics --- 00:22:17.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.045 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:17.045 00:22:17.045 --- 10.0.0.1 ping statistics --- 00:22:17.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.045 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=196659 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 196659 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 196659 ']' 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.045 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.045 [2024-11-06 12:28:48.571565] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:17.045 [2024-11-06 12:28:48.571628] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:17.045 [2024-11-06 12:28:48.655784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.305 [2024-11-06 12:28:48.699566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.305 [2024-11-06 12:28:48.699599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.305 [2024-11-06 12:28:48.699606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.305 [2024-11-06 12:28:48.699611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.305 [2024-11-06 12:28:48.699615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.305 [2024-11-06 12:28:48.700667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:17.305 [2024-11-06 12:28:48.700755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:17.305 [2024-11-06 12:28:48.700864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.305 [2024-11-06 12:28:48.700865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 [2024-11-06 12:28:48.843444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.305 12:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 Malloc0 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.305 [2024-11-06 12:28:48.888124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.305 12:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.305 { 00:22:17.305 "params": { 00:22:17.305 "name": "Nvme$subsystem", 00:22:17.305 "trtype": "$TEST_TRANSPORT", 00:22:17.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.305 "adrfam": "ipv4", 00:22:17.305 "trsvcid": "$NVMF_PORT", 00:22:17.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.305 "hdgst": ${hdgst:-false}, 00:22:17.305 "ddgst": ${ddgst:-false} 00:22:17.305 }, 00:22:17.305 "method": "bdev_nvme_attach_controller" 00:22:17.305 } 00:22:17.305 EOF 00:22:17.305 )") 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:17.305 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:17.306 "params": { 00:22:17.306 "name": "Nvme1", 00:22:17.306 "trtype": "tcp", 00:22:17.306 "traddr": "10.0.0.2", 00:22:17.306 "adrfam": "ipv4", 00:22:17.306 "trsvcid": "4420", 00:22:17.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.306 "hdgst": false, 00:22:17.306 "ddgst": false 00:22:17.306 }, 00:22:17.306 "method": "bdev_nvme_attach_controller" 00:22:17.306 }' 00:22:17.565 [2024-11-06 12:28:48.940606] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:22:17.565 [2024-11-06 12:28:48.940651] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid196709 ] 00:22:17.565 [2024-11-06 12:28:49.023815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.565 [2024-11-06 12:28:49.090637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.565 [2024-11-06 12:28:49.090742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.565 [2024-11-06 12:28:49.090743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.824 I/O targets: 00:22:17.824 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:17.824 00:22:17.824 00:22:17.824 CUnit - A unit testing framework for C - Version 2.1-3 00:22:17.824 http://cunit.sourceforge.net/ 00:22:17.824 00:22:17.824 00:22:17.824 Suite: bdevio tests on: Nvme1n1 00:22:18.083 Test: blockdev write read block ...passed 00:22:18.083 Test: blockdev write zeroes read block ...passed 00:22:18.083 Test: blockdev write zeroes read no split ...passed 00:22:18.083 Test: blockdev write zeroes 
read split ...passed 00:22:18.083 Test: blockdev write zeroes read split partial ...passed 00:22:18.083 Test: blockdev reset ...[2024-11-06 12:28:49.557787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:18.083 [2024-11-06 12:28:49.557869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399ef0 (9): Bad file descriptor 00:22:18.083 [2024-11-06 12:28:49.656187] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:18.083 passed 00:22:18.083 Test: blockdev write read 8 blocks ...passed 00:22:18.083 Test: blockdev write read size > 128k ...passed 00:22:18.083 Test: blockdev write read invalid size ...passed 00:22:18.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.341 Test: blockdev write read max offset ...passed 00:22:18.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.341 Test: blockdev writev readv 8 blocks ...passed 00:22:18.341 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.341 Test: blockdev writev readv block ...passed 00:22:18.341 Test: blockdev writev readv size > 128k ...passed 00:22:18.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.341 Test: blockdev comparev and writev ...[2024-11-06 12:28:49.866286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.866327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 
12:28:49.866334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.866591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.866601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.866612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.866619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.866854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.866863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.866873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.866880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.341 [2024-11-06 12:28:49.867116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.341 [2024-11-06 12:28:49.867125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.342 [2024-11-06 12:28:49.867135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.342 [2024-11-06 12:28:49.867141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.342 passed 00:22:18.342 Test: blockdev nvme passthru rw ...passed 00:22:18.342 Test: blockdev nvme passthru vendor specific ...[2024-11-06 12:28:49.948778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.342 [2024-11-06 12:28:49.948794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.342 [2024-11-06 12:28:49.948908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.342 [2024-11-06 12:28:49.948917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.342 [2024-11-06 12:28:49.949037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.342 [2024-11-06 12:28:49.949045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.342 [2024-11-06 12:28:49.949148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.342 [2024-11-06 12:28:49.949156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.342 passed 00:22:18.599 Test: blockdev nvme admin passthru ...passed 00:22:18.599 Test: blockdev copy ...passed 00:22:18.599 00:22:18.599 Run Summary: Type Total Ran Passed Failed Inactive 00:22:18.599 suites 1 1 n/a 0 0 00:22:18.599 tests 23 23 23 0 0 00:22:18.599 asserts 152 152 152 0 n/a 00:22:18.599 00:22:18.599 Elapsed time = 1.221 seconds 
00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.857 rmmod nvme_tcp 00:22:18.857 rmmod nvme_fabrics 00:22:18.857 rmmod nvme_keyring 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:18.857 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 196659 ']' 00:22:18.857 12:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 196659 00:22:18.858 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 196659 ']' 00:22:18.858 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 196659 00:22:18.858 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:18.858 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:18.858 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 196659 00:22:19.116 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:19.116 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:19.116 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 196659' 00:22:19.116 killing process with pid 196659 00:22:19.116 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 196659 00:22:19.116 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 196659 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:19.375 12:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.375 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.280 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.280 00:22:21.280 real 0m10.194s 00:22:21.280 user 0m12.468s 00:22:21.280 sys 0m5.238s 00:22:21.280 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.280 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.280 ************************************ 00:22:21.280 END TEST nvmf_bdevio_no_huge 00:22:21.280 ************************************ 00:22:21.540 12:28:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.540 12:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:21.540 12:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.540 12:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:21.540 
************************************ 00:22:21.540 START TEST nvmf_tls 00:22:21.540 ************************************ 00:22:21.540 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.540 * Looking for test storage... 00:22:21.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
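The `cmp_versions` walk traced above (splitting on `.-` via `IFS`, then comparing components left to right) can be condensed into a standalone helper. This is a re-sketch under the same splitting rules, not the SPDK `scripts/common.sh` implementation itself:

```shell
# lt VER1 VER2 -> success if VER1 < VER2, comparing dot/dash components
# numerically with missing components treated as 0 (as in the traced walk).
lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2"
```

This reproduces why the log's `lt 1.15 2` check on the lcov version succeeds: `1 < 2` decides the comparison at the first component.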
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.540 --rc genhtml_branch_coverage=1 00:22:21.540 --rc genhtml_function_coverage=1 00:22:21.540 --rc genhtml_legend=1 00:22:21.540 --rc geninfo_all_blocks=1 00:22:21.540 --rc geninfo_unexecuted_blocks=1 00:22:21.540 00:22:21.540 ' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.540 --rc genhtml_branch_coverage=1 00:22:21.540 --rc genhtml_function_coverage=1 00:22:21.540 --rc genhtml_legend=1 00:22:21.540 --rc geninfo_all_blocks=1 00:22:21.540 --rc geninfo_unexecuted_blocks=1 00:22:21.540 00:22:21.540 ' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.540 --rc genhtml_branch_coverage=1 00:22:21.540 --rc genhtml_function_coverage=1 00:22:21.540 --rc genhtml_legend=1 00:22:21.540 --rc geninfo_all_blocks=1 00:22:21.540 --rc geninfo_unexecuted_blocks=1 00:22:21.540 00:22:21.540 ' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.540 --rc genhtml_branch_coverage=1 00:22:21.540 --rc genhtml_function_coverage=1 00:22:21.540 --rc genhtml_legend=1 00:22:21.540 --rc geninfo_all_blocks=1 00:22:21.540 --rc geninfo_unexecuted_blocks=1 00:22:21.540 00:22:21.540 ' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.540 
12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.540 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.541 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.799 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.799 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.799 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:21.799 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.071 12:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:27.071 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:27.071 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.071 12:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:27.071 Found net devices under 0000:af:00.0: cvl_0_0 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:27.071 Found net devices under 0000:af:00.1: cvl_0_1 00:22:27.071 12:28:58 
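The device-discovery loop above maps each PCI function to its net device by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A sketch of that glob walk with a temp directory standing in for sysfs (the `0000:af:00.x` addresses and `cvl_0_x` names are taken from the log):

```shell
# Mock sysfs tree so the glob walk is runnable without the real NICs.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    # same glob + prefix-strip idiom as nvmf/common.sh in the trace
    pci_net_devs=("$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    net_devs+=("${pci_net_devs[@]}")
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done
rm -rf "$sysfs"
```

The `${pci_net_devs[@]##*/}` expansion is what turns the full sysfs paths into bare interface names before they are appended to `net_devs`.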
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.071 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.072 
12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.072 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:22:27.331 00:22:27.331 --- 10.0.0.2 ping statistics --- 00:22:27.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.331 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:27.331 00:22:27.331 --- 10.0.0.1 ping statistics --- 00:22:27.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.331 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=200703 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 200703 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
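The namespace plumbing above (create a netns, move one port into it, address both ends, bring links up, ping across) needs root and the physical NICs, so this dry-run sketch only echoes each command. Interface names, the namespace name, and the 10.0.0.x addresses are taken from the log:

```shell
# run() echoes instead of executing, so the sequence is illustrative only.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ping -c 1 10.0.0.2   # initiator side -> target side, as in the log
```

Putting the target port in its own namespace is what lets initiator and target traffic traverse the physical link on one host, which the two ping checks in the log then verify in both directions.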
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 200703 ']' 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.331 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.331 [2024-11-06 12:28:58.840762] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:22:27.331 [2024-11-06 12:28:58.840820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.331 [2024-11-06 12:28:58.913447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.590 [2024-11-06 12:28:58.953116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.590 [2024-11-06 12:28:58.953146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:27.590 [2024-11-06 12:28:58.953153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.590 [2024-11-06 12:28:58.953158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.590 [2024-11-06 12:28:58.953163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.590 [2024-11-06 12:28:58.953728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:27.590 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:27.849 true 00:22:27.849 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.849 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:28.108 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:28.108 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:28.108 
12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:28.366 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.366 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:28.625 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:28.625 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:28.625 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:28.883 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.883 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:29.142 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:29.142 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:29.142 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.142 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:29.401 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:29.401 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:29.401 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:29.970 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.970 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:29.970 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:29.970 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:29.970 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:30.229 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.229 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:30.487 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:30.746 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Q6dnfAfZNh 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.CqiUGnuqbh 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Q6dnfAfZNh 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.CqiUGnuqbh 00:22:30.746 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:31.006 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:31.265 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Q6dnfAfZNh 00:22:31.265 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q6dnfAfZNh 00:22:31.265 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.524 [2024-11-06 12:29:03.078656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.524 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:31.782 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.041 [2024-11-06 12:29:03.599986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.041 [2024-11-06 12:29:03.600214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.041 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.300 malloc0 00:22:32.300 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.558 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q6dnfAfZNh 00:22:32.817 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.076 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Q6dnfAfZNh 00:22:45.286 Initializing NVMe Controllers 00:22:45.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:45.286 Initialization complete. Launching workers. 
00:22:45.286 ======================================================== 00:22:45.286 Latency(us) 00:22:45.286 Device Information : IOPS MiB/s Average min max 00:22:45.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17783.82 69.47 3598.40 1424.40 43900.73 00:22:45.286 ======================================================== 00:22:45.286 Total : 17783.82 69.47 3598.40 1424.40 43900.73 00:22:45.286 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6dnfAfZNh 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q6dnfAfZNh 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=203491 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 203491 /var/tmp/bdevperf.sock 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 203491 ']' 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.286 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.286 [2024-11-06 12:29:14.817373] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:22:45.286 [2024-11-06 12:29:14.817434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203491 ] 00:22:45.286 [2024-11-06 12:29:14.884032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.286 [2024-11-06 12:29:14.925108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.286 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.286 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:45.286 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6dnfAfZNh 00:22:45.286 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:45.286 [2024-11-06 12:29:15.484383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.286 TLSTESTn1 00:22:45.286 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:45.286 Running I/O for 10 seconds... 00:22:46.223 5885.00 IOPS, 22.99 MiB/s [2024-11-06T11:29:18.774Z] 5834.50 IOPS, 22.79 MiB/s [2024-11-06T11:29:19.711Z] 5939.67 IOPS, 23.20 MiB/s [2024-11-06T11:29:21.087Z] 5931.00 IOPS, 23.17 MiB/s [2024-11-06T11:29:22.024Z] 5894.00 IOPS, 23.02 MiB/s [2024-11-06T11:29:22.961Z] 5924.00 IOPS, 23.14 MiB/s [2024-11-06T11:29:23.897Z] 5926.43 IOPS, 23.15 MiB/s [2024-11-06T11:29:24.835Z] 5938.75 IOPS, 23.20 MiB/s [2024-11-06T11:29:25.773Z] 5941.22 IOPS, 23.21 MiB/s [2024-11-06T11:29:25.773Z] 5952.30 IOPS, 23.25 MiB/s 00:22:54.158 Latency(us) 00:22:54.158 [2024-11-06T11:29:25.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.158 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.158 Verification LBA range: start 0x0 length 0x2000 00:22:54.158 TLSTESTn1 : 10.01 5956.98 23.27 0.00 0.00 21456.13 5779.08 25022.84 00:22:54.158 [2024-11-06T11:29:25.773Z] =================================================================================================================== 00:22:54.158 [2024-11-06T11:29:25.773Z] Total : 5956.98 23.27 0.00 0.00 21456.13 5779.08 25022.84 00:22:54.158 { 00:22:54.158 "results": [ 00:22:54.158 { 00:22:54.158 "job": "TLSTESTn1", 00:22:54.158 "core_mask": "0x4", 00:22:54.158 "workload": "verify", 00:22:54.158 "status": "finished", 00:22:54.158 "verify_range": { 00:22:54.158 "start": 0, 00:22:54.158 "length": 8192 00:22:54.158 }, 00:22:54.158 "queue_depth": 128, 00:22:54.158 "io_size": 4096, 00:22:54.158 "runtime": 10.01346, 00:22:54.158 "iops": 
5956.981902359425, 00:22:54.158 "mibps": 23.269460556091502, 00:22:54.158 "io_failed": 0, 00:22:54.158 "io_timeout": 0, 00:22:54.158 "avg_latency_us": 21456.125335243465, 00:22:54.158 "min_latency_us": 5779.083636363636, 00:22:54.158 "max_latency_us": 25022.836363636365 00:22:54.158 } 00:22:54.158 ], 00:22:54.158 "core_count": 1 00:22:54.158 } 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 203491 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 203491 ']' 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 203491 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:54.158 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 203491 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 203491' 00:22:54.418 killing process with pid 203491 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 203491 00:22:54.418 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.418 00:22:54.418 Latency(us) 00:22:54.418 [2024-11-06T11:29:26.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.418 [2024-11-06T11:29:26.033Z] 
=================================================================================================================== 00:22:54.418 [2024-11-06T11:29:26.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 203491 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CqiUGnuqbh 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CqiUGnuqbh 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CqiUGnuqbh 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CqiUGnuqbh 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=205484 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 205484 /var/tmp/bdevperf.sock 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 205484 ']' 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.418 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.418 [2024-11-06 12:29:26.001161] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:54.418 [2024-11-06 12:29:26.001223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205484 ] 00:22:54.678 [2024-11-06 12:29:26.066276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.678 [2024-11-06 12:29:26.103319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.678 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:54.678 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:54.678 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CqiUGnuqbh 00:22:54.936 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.196 [2024-11-06 12:29:26.684792] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.196 [2024-11-06 12:29:26.696141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:55.196 [2024-11-06 12:29:26.697081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1837660 (107): Transport endpoint is not connected 00:22:55.196 [2024-11-06 12:29:26.698075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1837660 (9): Bad file descriptor 00:22:55.196 
[2024-11-06 12:29:26.699077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:55.196 [2024-11-06 12:29:26.699086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.196 [2024-11-06 12:29:26.699092] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:55.196 [2024-11-06 12:29:26.699101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:55.196 request: 00:22:55.196 { 00:22:55.196 "name": "TLSTEST", 00:22:55.196 "trtype": "tcp", 00:22:55.196 "traddr": "10.0.0.2", 00:22:55.196 "adrfam": "ipv4", 00:22:55.196 "trsvcid": "4420", 00:22:55.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.196 "prchk_reftag": false, 00:22:55.196 "prchk_guard": false, 00:22:55.196 "hdgst": false, 00:22:55.196 "ddgst": false, 00:22:55.196 "psk": "key0", 00:22:55.196 "allow_unrecognized_csi": false, 00:22:55.196 "method": "bdev_nvme_attach_controller", 00:22:55.196 "req_id": 1 00:22:55.196 } 00:22:55.196 Got JSON-RPC error response 00:22:55.196 response: 00:22:55.196 { 00:22:55.196 "code": -5, 00:22:55.196 "message": "Input/output error" 00:22:55.196 } 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 205484 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 205484 ']' 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 205484 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 205484 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 205484' 00:22:55.196 killing process with pid 205484 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 205484 00:22:55.196 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.196 00:22:55.196 Latency(us) 00:22:55.196 [2024-11-06T11:29:26.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.196 [2024-11-06T11:29:26.811Z] =================================================================================================================== 00:22:55.196 [2024-11-06T11:29:26.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.196 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 205484 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Q6dnfAfZNh 00:22:55.455 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Q6dnfAfZNh 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Q6dnfAfZNh 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q6dnfAfZNh 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=205702 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 205702 
/var/tmp/bdevperf.sock 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 205702 ']' 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:55.456 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.456 [2024-11-06 12:29:26.989915] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:55.456 [2024-11-06 12:29:26.989981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205702 ] 00:22:55.456 [2024-11-06 12:29:27.056784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.715 [2024-11-06 12:29:27.094083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.715 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.715 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:55.715 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6dnfAfZNh 00:22:55.973 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:56.232 [2024-11-06 12:29:27.735848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.232 [2024-11-06 12:29:27.740501] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:56.232 [2024-11-06 12:29:27.740523] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:56.232 [2024-11-06 12:29:27.740544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:56.232 [2024-11-06 12:29:27.741231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf96660 (107): Transport endpoint is not connected 00:22:56.232 [2024-11-06 12:29:27.742225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf96660 (9): Bad file descriptor 00:22:56.232 [2024-11-06 12:29:27.743225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:56.233 [2024-11-06 12:29:27.743234] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:56.233 [2024-11-06 12:29:27.743241] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:56.233 [2024-11-06 12:29:27.743252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:56.233 request: 00:22:56.233 { 00:22:56.233 "name": "TLSTEST", 00:22:56.233 "trtype": "tcp", 00:22:56.233 "traddr": "10.0.0.2", 00:22:56.233 "adrfam": "ipv4", 00:22:56.233 "trsvcid": "4420", 00:22:56.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.233 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:56.233 "prchk_reftag": false, 00:22:56.233 "prchk_guard": false, 00:22:56.233 "hdgst": false, 00:22:56.233 "ddgst": false, 00:22:56.233 "psk": "key0", 00:22:56.233 "allow_unrecognized_csi": false, 00:22:56.233 "method": "bdev_nvme_attach_controller", 00:22:56.233 "req_id": 1 00:22:56.233 } 00:22:56.233 Got JSON-RPC error response 00:22:56.233 response: 00:22:56.233 { 00:22:56.233 "code": -5, 00:22:56.233 "message": "Input/output error" 00:22:56.233 } 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 205702 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 205702 ']' 00:22:56.233 12:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 205702 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 205702 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 205702' 00:22:56.233 killing process with pid 205702 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 205702 00:22:56.233 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.233 00:22:56.233 Latency(us) 00:22:56.233 [2024-11-06T11:29:27.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.233 [2024-11-06T11:29:27.848Z] =================================================================================================================== 00:22:56.233 [2024-11-06T11:29:27.848Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.233 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 205702 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.492 12:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6dnfAfZNh 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6dnfAfZNh 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q6dnfAfZNh 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q6dnfAfZNh 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=205768 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 205768 /var/tmp/bdevperf.sock 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 205768 ']' 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:56.492 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.492 [2024-11-06 12:29:28.023210] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:56.492 [2024-11-06 12:29:28.023275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205768 ] 00:22:56.492 [2024-11-06 12:29:28.091585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.751 [2024-11-06 12:29:28.128697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.751 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.751 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:56.751 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q6dnfAfZNh 00:22:57.010 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.270 [2024-11-06 12:29:28.766497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.270 [2024-11-06 12:29:28.771273] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:57.270 [2024-11-06 12:29:28.771293] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:57.270 [2024-11-06 12:29:28.771314] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:57.270 [2024-11-06 12:29:28.771790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2249660 (107): Transport endpoint is not connected 00:22:57.270 [2024-11-06 12:29:28.772784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2249660 (9): Bad file descriptor 00:22:57.270 [2024-11-06 12:29:28.773785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:57.270 [2024-11-06 12:29:28.773794] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:57.270 [2024-11-06 12:29:28.773800] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:57.270 [2024-11-06 12:29:28.773809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:57.270 request: 00:22:57.270 { 00:22:57.270 "name": "TLSTEST", 00:22:57.270 "trtype": "tcp", 00:22:57.270 "traddr": "10.0.0.2", 00:22:57.270 "adrfam": "ipv4", 00:22:57.270 "trsvcid": "4420", 00:22:57.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.270 "prchk_reftag": false, 00:22:57.270 "prchk_guard": false, 00:22:57.270 "hdgst": false, 00:22:57.270 "ddgst": false, 00:22:57.270 "psk": "key0", 00:22:57.270 "allow_unrecognized_csi": false, 00:22:57.270 "method": "bdev_nvme_attach_controller", 00:22:57.270 "req_id": 1 00:22:57.270 } 00:22:57.270 Got JSON-RPC error response 00:22:57.270 response: 00:22:57.270 { 00:22:57.270 "code": -5, 00:22:57.270 "message": "Input/output error" 00:22:57.270 } 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 205768 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 205768 ']' 00:22:57.270 12:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 205768 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 205768 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 205768' 00:22:57.270 killing process with pid 205768 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 205768 00:22:57.270 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.270 00:22:57.270 Latency(us) 00:22:57.270 [2024-11-06T11:29:28.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.270 [2024-11-06T11:29:28.885Z] =================================================================================================================== 00:22:57.270 [2024-11-06T11:29:28.885Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.270 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 205768 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:57.530 12:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=206034 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.530 12:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 206034 /var/tmp/bdevperf.sock 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 206034 ']' 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.530 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.530 [2024-11-06 12:29:29.048988] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:57.530 [2024-11-06 12:29:29.049056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206034 ] 00:22:57.530 [2024-11-06 12:29:29.114867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.789 [2024-11-06 12:29:29.150829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.789 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.789 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:57.789 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:58.048 [2024-11-06 12:29:29.519915] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:58.048 [2024-11-06 12:29:29.519948] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:58.048 request: 00:22:58.048 { 00:22:58.048 "name": "key0", 00:22:58.048 "path": "", 00:22:58.048 "method": "keyring_file_add_key", 00:22:58.048 "req_id": 1 00:22:58.048 } 00:22:58.048 Got JSON-RPC error response 00:22:58.048 response: 00:22:58.048 { 00:22:58.048 "code": -1, 00:22:58.048 "message": "Operation not permitted" 00:22:58.048 } 00:22:58.048 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.307 [2024-11-06 12:29:29.784680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:58.307 [2024-11-06 12:29:29.784706] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:58.307 request: 00:22:58.307 { 00:22:58.307 "name": "TLSTEST", 00:22:58.307 "trtype": "tcp", 00:22:58.307 "traddr": "10.0.0.2", 00:22:58.307 "adrfam": "ipv4", 00:22:58.307 "trsvcid": "4420", 00:22:58.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.307 "prchk_reftag": false, 00:22:58.307 "prchk_guard": false, 00:22:58.307 "hdgst": false, 00:22:58.307 "ddgst": false, 00:22:58.307 "psk": "key0", 00:22:58.307 "allow_unrecognized_csi": false, 00:22:58.307 "method": "bdev_nvme_attach_controller", 00:22:58.307 "req_id": 1 00:22:58.307 } 00:22:58.307 Got JSON-RPC error response 00:22:58.307 response: 00:22:58.307 { 00:22:58.307 "code": -126, 00:22:58.307 "message": "Required key not available" 00:22:58.307 } 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 206034 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 206034 ']' 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 206034 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 206034 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:58.307 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 206034' 00:22:58.307 killing process with pid 206034 00:22:58.307 
12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 206034 00:22:58.307 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.307 00:22:58.307 Latency(us) 00:22:58.307 [2024-11-06T11:29:29.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.307 [2024-11-06T11:29:29.923Z] =================================================================================================================== 00:22:58.308 [2024-11-06T11:29:29.923Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.308 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 206034 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 200703 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 200703 ']' 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 200703 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 200703 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 200703' 00:22:58.566 killing process with pid 200703 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 200703 00:22:58.566 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 200703 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Ipb4RD1IwM 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:58.826 12:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Ipb4RD1IwM 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=206314 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 206314 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 206314 ']' 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:58.826 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.826 [2024-11-06 12:29:30.355587] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:22:58.826 [2024-11-06 12:29:30.355653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.826 [2024-11-06 12:29:30.428844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.085 [2024-11-06 12:29:30.463329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.085 [2024-11-06 12:29:30.463360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.085 [2024-11-06 12:29:30.463366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.085 [2024-11-06 12:29:30.463372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.085 [2024-11-06 12:29:30.463377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.085 [2024-11-06 12:29:30.463934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.085 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:22:59.086 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ipb4RD1IwM 00:22:59.086 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:59.344 [2024-11-06 12:29:30.869911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.344 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:59.603 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:59.862 [2024-11-06 12:29:31.415341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.862 [2024-11-06 12:29:31.415551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:59.862 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:00.121 malloc0 00:23:00.121 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:00.380 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:00.639 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ipb4RD1IwM 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ipb4RD1IwM 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=206661 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.897 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 206661 /var/tmp/bdevperf.sock 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 206661 ']' 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:00.898 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.157 [2024-11-06 12:29:32.545128] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:01.157 [2024-11-06 12:29:32.545189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206661 ] 00:23:01.157 [2024-11-06 12:29:32.610652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.157 [2024-11-06 12:29:32.648077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.157 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.157 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:01.157 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:01.723 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.723 [2024-11-06 12:29:33.281818] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.982 TLSTESTn1 00:23:01.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:01.982 Running I/O for 10 seconds... 
00:23:04.296 3782.00 IOPS, 14.77 MiB/s [2024-11-06T11:29:36.848Z] 4015.50 IOPS, 15.69 MiB/s [2024-11-06T11:29:37.784Z] 4044.33 IOPS, 15.80 MiB/s [2024-11-06T11:29:38.721Z] 4040.00 IOPS, 15.78 MiB/s [2024-11-06T11:29:39.657Z] 4063.00 IOPS, 15.87 MiB/s [2024-11-06T11:29:40.593Z] 4086.00 IOPS, 15.96 MiB/s [2024-11-06T11:29:41.530Z] 4084.29 IOPS, 15.95 MiB/s [2024-11-06T11:29:42.906Z] 4102.25 IOPS, 16.02 MiB/s [2024-11-06T11:29:43.844Z] 4082.44 IOPS, 15.95 MiB/s [2024-11-06T11:29:43.844Z] 4056.30 IOPS, 15.84 MiB/s 00:23:12.229 Latency(us) 00:23:12.229 [2024-11-06T11:29:43.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.229 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:12.229 Verification LBA range: start 0x0 length 0x2000 00:23:12.229 TLSTESTn1 : 10.03 4057.51 15.85 0.00 0.00 31490.27 4557.73 52905.43 00:23:12.229 [2024-11-06T11:29:43.844Z] =================================================================================================================== 00:23:12.229 [2024-11-06T11:29:43.844Z] Total : 4057.51 15.85 0.00 0.00 31490.27 4557.73 52905.43 00:23:12.229 { 00:23:12.229 "results": [ 00:23:12.229 { 00:23:12.229 "job": "TLSTESTn1", 00:23:12.229 "core_mask": "0x4", 00:23:12.229 "workload": "verify", 00:23:12.229 "status": "finished", 00:23:12.229 "verify_range": { 00:23:12.229 "start": 0, 00:23:12.229 "length": 8192 00:23:12.229 }, 00:23:12.229 "queue_depth": 128, 00:23:12.229 "io_size": 4096, 00:23:12.229 "runtime": 10.028559, 00:23:12.229 "iops": 4057.5121510478225, 00:23:12.229 "mibps": 15.849656840030557, 00:23:12.229 "io_failed": 0, 00:23:12.229 "io_timeout": 0, 00:23:12.229 "avg_latency_us": 31490.272675530214, 00:23:12.229 "min_latency_us": 4557.730909090909, 00:23:12.229 "max_latency_us": 52905.42545454545 00:23:12.229 } 00:23:12.229 ], 00:23:12.229 "core_count": 1 00:23:12.229 } 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 206661 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 206661 ']' 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 206661 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 206661 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 206661' 00:23:12.229 killing process with pid 206661 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 206661 00:23:12.229 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.229 00:23:12.229 Latency(us) 00:23:12.229 [2024-11-06T11:29:43.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.229 [2024-11-06T11:29:43.844Z] =================================================================================================================== 00:23:12.229 [2024-11-06T11:29:43.844Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 206661 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Ipb4RD1IwM 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ipb4RD1IwM 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ipb4RD1IwM 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ipb4RD1IwM 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ipb4RD1IwM 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=208704 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 208704 /var/tmp/bdevperf.sock 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 208704 ']' 00:23:12.229 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.230 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:12.230 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.230 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:12.230 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.230 [2024-11-06 12:29:43.815987] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:12.230 [2024-11-06 12:29:43.816052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid208704 ] 00:23:12.489 [2024-11-06 12:29:43.882038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.489 [2024-11-06 12:29:43.917271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.489 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:12.489 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:12.489 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:12.748 [2024-11-06 12:29:44.298348] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ipb4RD1IwM': 0100666 00:23:12.748 [2024-11-06 12:29:44.298378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:12.748 request: 00:23:12.748 { 00:23:12.748 "name": "key0", 00:23:12.748 "path": "/tmp/tmp.Ipb4RD1IwM", 00:23:12.748 "method": "keyring_file_add_key", 00:23:12.748 "req_id": 1 00:23:12.748 } 00:23:12.748 Got JSON-RPC error response 00:23:12.748 response: 00:23:12.748 { 00:23:12.748 "code": -1, 00:23:12.748 "message": "Operation not permitted" 00:23:12.748 } 00:23:12.748 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.007 [2024-11-06 12:29:44.575132] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.007 [2024-11-06 12:29:44.575158] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:13.007 request: 00:23:13.007 { 00:23:13.007 "name": "TLSTEST", 00:23:13.007 "trtype": "tcp", 00:23:13.007 "traddr": "10.0.0.2", 00:23:13.007 "adrfam": "ipv4", 00:23:13.007 "trsvcid": "4420", 00:23:13.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.007 "prchk_reftag": false, 00:23:13.007 "prchk_guard": false, 00:23:13.007 "hdgst": false, 00:23:13.007 "ddgst": false, 00:23:13.007 "psk": "key0", 00:23:13.007 "allow_unrecognized_csi": false, 00:23:13.007 "method": "bdev_nvme_attach_controller", 00:23:13.007 "req_id": 1 00:23:13.007 } 00:23:13.007 Got JSON-RPC error response 00:23:13.007 response: 00:23:13.007 { 00:23:13.007 "code": -126, 00:23:13.007 "message": "Required key not available" 00:23:13.007 } 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 208704 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 208704 ']' 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 208704 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.007 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 208704 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 208704' 00:23:13.267 killing process with pid 208704 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 208704 00:23:13.267 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.267 00:23:13.267 Latency(us) 00:23:13.267 [2024-11-06T11:29:44.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.267 [2024-11-06T11:29:44.882Z] =================================================================================================================== 00:23:13.267 [2024-11-06T11:29:44.882Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 208704 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 206314 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 206314 ']' 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 206314 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 206314 00:23:13.267 12:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 206314' 00:23:13.267 killing process with pid 206314 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 206314 00:23:13.267 12:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 206314 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=208973 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 208973 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 208973 ']' 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:13.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.527 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.527 [2024-11-06 12:29:45.087077] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:13.527 [2024-11-06 12:29:45.087139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.786 [2024-11-06 12:29:45.160185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.786 [2024-11-06 12:29:45.195247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.786 [2024-11-06 12:29:45.195281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.786 [2024-11-06 12:29:45.195287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.786 [2024-11-06 12:29:45.195292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.786 [2024-11-06 12:29:45.195297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:13.786 [2024-11-06 12:29:45.195894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ipb4RD1IwM 00:23:13.786 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.045 [2024-11-06 12:29:45.600611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.045 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.304 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.563 [2024-11-06 12:29:46.137998] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.563 [2024-11-06 12:29:46.138215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.563 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.822 malloc0 00:23:14.822 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.390 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:15.390 [2024-11-06 12:29:46.951742] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ipb4RD1IwM': 0100666 00:23:15.390 [2024-11-06 12:29:46.951764] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:15.390 request: 00:23:15.390 { 00:23:15.390 "name": "key0", 00:23:15.390 "path": "/tmp/tmp.Ipb4RD1IwM", 00:23:15.390 "method": "keyring_file_add_key", 00:23:15.390 "req_id": 1 
00:23:15.390 } 00:23:15.390 Got JSON-RPC error response 00:23:15.390 response: 00:23:15.390 { 00:23:15.390 "code": -1, 00:23:15.390 "message": "Operation not permitted" 00:23:15.390 } 00:23:15.390 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.649 [2024-11-06 12:29:47.220462] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:15.649 [2024-11-06 12:29:47.220494] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:15.649 request: 00:23:15.649 { 00:23:15.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.650 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.650 "psk": "key0", 00:23:15.650 "method": "nvmf_subsystem_add_host", 00:23:15.650 "req_id": 1 00:23:15.650 } 00:23:15.650 Got JSON-RPC error response 00:23:15.650 response: 00:23:15.650 { 00:23:15.650 "code": -32603, 00:23:15.650 "message": "Internal error" 00:23:15.650 } 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 208973 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 208973 ']' 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 208973 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:15.650 12:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:15.650 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 208973 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 208973' 00:23:15.909 killing process with pid 208973 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 208973 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 208973 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Ipb4RD1IwM 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=209392 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 209392 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 209392 ']' 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.909 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.168 [2024-11-06 12:29:47.530980] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:16.168 [2024-11-06 12:29:47.531043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.168 [2024-11-06 12:29:47.603066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.168 [2024-11-06 12:29:47.639042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.168 [2024-11-06 12:29:47.639077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.168 [2024-11-06 12:29:47.639083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.168 [2024-11-06 12:29:47.639089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.168 [2024-11-06 12:29:47.639093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.168 [2024-11-06 12:29:47.639636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.168 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.168 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:16.168 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.168 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.168 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.427 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.427 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:23:16.427 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ipb4RD1IwM 00:23:16.427 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.427 [2024-11-06 12:29:48.041422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.686 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.944 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:17.202 [2024-11-06 12:29:48.582810] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.202 [2024-11-06 12:29:48.583006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:17.202 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:17.461 malloc0 00:23:17.461 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.720 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:17.979 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=209827 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 209827 /var/tmp/bdevperf.sock 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 209827 ']' 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:18.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.237 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.237 [2024-11-06 12:29:49.716258] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:18.237 [2024-11-06 12:29:49.716322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209827 ] 00:23:18.237 [2024-11-06 12:29:49.781890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.237 [2024-11-06 12:29:49.819755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.496 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:18.496 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:18.496 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:18.754 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.013 [2024-11-06 12:29:50.461583] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.013 TLSTESTn1 00:23:19.013 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:19.272 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:19.272 "subsystems": [ 00:23:19.272 { 00:23:19.272 "subsystem": "keyring", 00:23:19.272 "config": [ 00:23:19.272 { 00:23:19.272 "method": "keyring_file_add_key", 00:23:19.272 "params": { 00:23:19.272 "name": "key0", 00:23:19.272 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:19.272 } 00:23:19.272 } 00:23:19.272 ] 00:23:19.272 }, 00:23:19.272 { 00:23:19.272 "subsystem": "iobuf", 00:23:19.272 "config": [ 00:23:19.272 { 00:23:19.272 "method": "iobuf_set_options", 00:23:19.272 "params": { 00:23:19.272 "small_pool_count": 8192, 00:23:19.272 "large_pool_count": 1024, 00:23:19.272 "small_bufsize": 8192, 00:23:19.272 "large_bufsize": 135168, 00:23:19.272 "enable_numa": false 00:23:19.272 } 00:23:19.272 } 00:23:19.272 ] 00:23:19.272 }, 00:23:19.272 { 00:23:19.272 "subsystem": "sock", 00:23:19.272 "config": [ 00:23:19.272 { 00:23:19.272 "method": "sock_set_default_impl", 00:23:19.272 "params": { 00:23:19.272 "impl_name": "posix" 00:23:19.272 } 00:23:19.272 }, 00:23:19.273 { 00:23:19.273 "method": "sock_impl_set_options", 00:23:19.273 "params": { 00:23:19.273 "impl_name": "ssl", 00:23:19.273 "recv_buf_size": 4096, 00:23:19.273 "send_buf_size": 4096, 00:23:19.273 "enable_recv_pipe": true, 00:23:19.273 "enable_quickack": false, 00:23:19.273 "enable_placement_id": 0, 00:23:19.273 "enable_zerocopy_send_server": true, 00:23:19.273 "enable_zerocopy_send_client": false, 00:23:19.273 "zerocopy_threshold": 0, 00:23:19.273 "tls_version": 0, 00:23:19.273 "enable_ktls": false 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "sock_impl_set_options", 00:23:19.273 "params": { 00:23:19.273 "impl_name": "posix", 00:23:19.273 "recv_buf_size": 2097152, 00:23:19.273 "send_buf_size": 2097152, 00:23:19.273 "enable_recv_pipe": true, 00:23:19.273 "enable_quickack": false, 00:23:19.273 "enable_placement_id": 0, 
00:23:19.273 "enable_zerocopy_send_server": true, 00:23:19.273 "enable_zerocopy_send_client": false, 00:23:19.273 "zerocopy_threshold": 0, 00:23:19.273 "tls_version": 0, 00:23:19.273 "enable_ktls": false 00:23:19.273 } 00:23:19.273 } 00:23:19.273 ] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "vmd", 00:23:19.273 "config": [] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "accel", 00:23:19.273 "config": [ 00:23:19.273 { 00:23:19.273 "method": "accel_set_options", 00:23:19.273 "params": { 00:23:19.273 "small_cache_size": 128, 00:23:19.273 "large_cache_size": 16, 00:23:19.273 "task_count": 2048, 00:23:19.273 "sequence_count": 2048, 00:23:19.273 "buf_count": 2048 00:23:19.273 } 00:23:19.273 } 00:23:19.273 ] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "bdev", 00:23:19.273 "config": [ 00:23:19.273 { 00:23:19.273 "method": "bdev_set_options", 00:23:19.273 "params": { 00:23:19.273 "bdev_io_pool_size": 65535, 00:23:19.273 "bdev_io_cache_size": 256, 00:23:19.273 "bdev_auto_examine": true, 00:23:19.273 "iobuf_small_cache_size": 128, 00:23:19.273 "iobuf_large_cache_size": 16 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_raid_set_options", 00:23:19.273 "params": { 00:23:19.273 "process_window_size_kb": 1024, 00:23:19.273 "process_max_bandwidth_mb_sec": 0 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_iscsi_set_options", 00:23:19.273 "params": { 00:23:19.273 "timeout_sec": 30 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_nvme_set_options", 00:23:19.273 "params": { 00:23:19.273 "action_on_timeout": "none", 00:23:19.273 "timeout_us": 0, 00:23:19.273 "timeout_admin_us": 0, 00:23:19.273 "keep_alive_timeout_ms": 10000, 00:23:19.273 "arbitration_burst": 0, 00:23:19.273 "low_priority_weight": 0, 00:23:19.273 "medium_priority_weight": 0, 00:23:19.273 "high_priority_weight": 0, 00:23:19.273 "nvme_adminq_poll_period_us": 10000, 00:23:19.273 "nvme_ioq_poll_period_us": 0, 
00:23:19.273 "io_queue_requests": 0, 00:23:19.273 "delay_cmd_submit": true, 00:23:19.273 "transport_retry_count": 4, 00:23:19.273 "bdev_retry_count": 3, 00:23:19.273 "transport_ack_timeout": 0, 00:23:19.273 "ctrlr_loss_timeout_sec": 0, 00:23:19.273 "reconnect_delay_sec": 0, 00:23:19.273 "fast_io_fail_timeout_sec": 0, 00:23:19.273 "disable_auto_failback": false, 00:23:19.273 "generate_uuids": false, 00:23:19.273 "transport_tos": 0, 00:23:19.273 "nvme_error_stat": false, 00:23:19.273 "rdma_srq_size": 0, 00:23:19.273 "io_path_stat": false, 00:23:19.273 "allow_accel_sequence": false, 00:23:19.273 "rdma_max_cq_size": 0, 00:23:19.273 "rdma_cm_event_timeout_ms": 0, 00:23:19.273 "dhchap_digests": [ 00:23:19.273 "sha256", 00:23:19.273 "sha384", 00:23:19.273 "sha512" 00:23:19.273 ], 00:23:19.273 "dhchap_dhgroups": [ 00:23:19.273 "null", 00:23:19.273 "ffdhe2048", 00:23:19.273 "ffdhe3072", 00:23:19.273 "ffdhe4096", 00:23:19.273 "ffdhe6144", 00:23:19.273 "ffdhe8192" 00:23:19.273 ] 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_nvme_set_hotplug", 00:23:19.273 "params": { 00:23:19.273 "period_us": 100000, 00:23:19.273 "enable": false 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_malloc_create", 00:23:19.273 "params": { 00:23:19.273 "name": "malloc0", 00:23:19.273 "num_blocks": 8192, 00:23:19.273 "block_size": 4096, 00:23:19.273 "physical_block_size": 4096, 00:23:19.273 "uuid": "68e7d327-5a6c-4bc6-9d7f-8a328e816976", 00:23:19.273 "optimal_io_boundary": 0, 00:23:19.273 "md_size": 0, 00:23:19.273 "dif_type": 0, 00:23:19.273 "dif_is_head_of_md": false, 00:23:19.273 "dif_pi_format": 0 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "bdev_wait_for_examine" 00:23:19.273 } 00:23:19.273 ] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "nbd", 00:23:19.273 "config": [] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "scheduler", 00:23:19.273 "config": [ 00:23:19.273 { 00:23:19.273 "method": 
"framework_set_scheduler", 00:23:19.273 "params": { 00:23:19.273 "name": "static" 00:23:19.273 } 00:23:19.273 } 00:23:19.273 ] 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "subsystem": "nvmf", 00:23:19.273 "config": [ 00:23:19.273 { 00:23:19.273 "method": "nvmf_set_config", 00:23:19.273 "params": { 00:23:19.273 "discovery_filter": "match_any", 00:23:19.273 "admin_cmd_passthru": { 00:23:19.273 "identify_ctrlr": false 00:23:19.273 }, 00:23:19.273 "dhchap_digests": [ 00:23:19.273 "sha256", 00:23:19.273 "sha384", 00:23:19.273 "sha512" 00:23:19.273 ], 00:23:19.273 "dhchap_dhgroups": [ 00:23:19.273 "null", 00:23:19.273 "ffdhe2048", 00:23:19.273 "ffdhe3072", 00:23:19.273 "ffdhe4096", 00:23:19.273 "ffdhe6144", 00:23:19.273 "ffdhe8192" 00:23:19.273 ] 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "nvmf_set_max_subsystems", 00:23:19.273 "params": { 00:23:19.273 "max_subsystems": 1024 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "nvmf_set_crdt", 00:23:19.273 "params": { 00:23:19.273 "crdt1": 0, 00:23:19.273 "crdt2": 0, 00:23:19.273 "crdt3": 0 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "nvmf_create_transport", 00:23:19.273 "params": { 00:23:19.273 "trtype": "TCP", 00:23:19.273 "max_queue_depth": 128, 00:23:19.273 "max_io_qpairs_per_ctrlr": 127, 00:23:19.273 "in_capsule_data_size": 4096, 00:23:19.273 "max_io_size": 131072, 00:23:19.273 "io_unit_size": 131072, 00:23:19.273 "max_aq_depth": 128, 00:23:19.273 "num_shared_buffers": 511, 00:23:19.273 "buf_cache_size": 4294967295, 00:23:19.273 "dif_insert_or_strip": false, 00:23:19.273 "zcopy": false, 00:23:19.273 "c2h_success": false, 00:23:19.273 "sock_priority": 0, 00:23:19.273 "abort_timeout_sec": 1, 00:23:19.273 "ack_timeout": 0, 00:23:19.273 "data_wr_pool_size": 0 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "nvmf_create_subsystem", 00:23:19.273 "params": { 00:23:19.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.273 
"allow_any_host": false, 00:23:19.273 "serial_number": "SPDK00000000000001", 00:23:19.273 "model_number": "SPDK bdev Controller", 00:23:19.273 "max_namespaces": 10, 00:23:19.273 "min_cntlid": 1, 00:23:19.273 "max_cntlid": 65519, 00:23:19.273 "ana_reporting": false 00:23:19.273 } 00:23:19.273 }, 00:23:19.273 { 00:23:19.273 "method": "nvmf_subsystem_add_host", 00:23:19.273 "params": { 00:23:19.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.273 "host": "nqn.2016-06.io.spdk:host1", 00:23:19.273 "psk": "key0" 00:23:19.274 } 00:23:19.274 }, 00:23:19.274 { 00:23:19.274 "method": "nvmf_subsystem_add_ns", 00:23:19.274 "params": { 00:23:19.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.274 "namespace": { 00:23:19.274 "nsid": 1, 00:23:19.274 "bdev_name": "malloc0", 00:23:19.274 "nguid": "68E7D3275A6C4BC69D7F8A328E816976", 00:23:19.274 "uuid": "68e7d327-5a6c-4bc6-9d7f-8a328e816976", 00:23:19.274 "no_auto_visible": false 00:23:19.274 } 00:23:19.274 } 00:23:19.274 }, 00:23:19.274 { 00:23:19.274 "method": "nvmf_subsystem_add_listener", 00:23:19.274 "params": { 00:23:19.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.274 "listen_address": { 00:23:19.274 "trtype": "TCP", 00:23:19.274 "adrfam": "IPv4", 00:23:19.274 "traddr": "10.0.0.2", 00:23:19.274 "trsvcid": "4420" 00:23:19.274 }, 00:23:19.274 "secure_channel": true 00:23:19.274 } 00:23:19.274 } 00:23:19.274 ] 00:23:19.274 } 00:23:19.274 ] 00:23:19.274 }' 00:23:19.274 12:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:19.533 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:19.533 "subsystems": [ 00:23:19.533 { 00:23:19.533 "subsystem": "keyring", 00:23:19.533 "config": [ 00:23:19.533 { 00:23:19.533 "method": "keyring_file_add_key", 00:23:19.533 "params": { 00:23:19.533 "name": "key0", 00:23:19.533 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:19.533 } 
00:23:19.533 } 00:23:19.533 ] 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "subsystem": "iobuf", 00:23:19.533 "config": [ 00:23:19.533 { 00:23:19.533 "method": "iobuf_set_options", 00:23:19.533 "params": { 00:23:19.533 "small_pool_count": 8192, 00:23:19.533 "large_pool_count": 1024, 00:23:19.533 "small_bufsize": 8192, 00:23:19.533 "large_bufsize": 135168, 00:23:19.533 "enable_numa": false 00:23:19.533 } 00:23:19.533 } 00:23:19.533 ] 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "subsystem": "sock", 00:23:19.533 "config": [ 00:23:19.533 { 00:23:19.533 "method": "sock_set_default_impl", 00:23:19.533 "params": { 00:23:19.533 "impl_name": "posix" 00:23:19.533 } 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "method": "sock_impl_set_options", 00:23:19.533 "params": { 00:23:19.533 "impl_name": "ssl", 00:23:19.533 "recv_buf_size": 4096, 00:23:19.533 "send_buf_size": 4096, 00:23:19.533 "enable_recv_pipe": true, 00:23:19.533 "enable_quickack": false, 00:23:19.533 "enable_placement_id": 0, 00:23:19.533 "enable_zerocopy_send_server": true, 00:23:19.533 "enable_zerocopy_send_client": false, 00:23:19.533 "zerocopy_threshold": 0, 00:23:19.533 "tls_version": 0, 00:23:19.533 "enable_ktls": false 00:23:19.533 } 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "method": "sock_impl_set_options", 00:23:19.533 "params": { 00:23:19.533 "impl_name": "posix", 00:23:19.533 "recv_buf_size": 2097152, 00:23:19.533 "send_buf_size": 2097152, 00:23:19.533 "enable_recv_pipe": true, 00:23:19.533 "enable_quickack": false, 00:23:19.533 "enable_placement_id": 0, 00:23:19.533 "enable_zerocopy_send_server": true, 00:23:19.533 "enable_zerocopy_send_client": false, 00:23:19.533 "zerocopy_threshold": 0, 00:23:19.533 "tls_version": 0, 00:23:19.533 "enable_ktls": false 00:23:19.533 } 00:23:19.533 } 00:23:19.533 ] 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "subsystem": "vmd", 00:23:19.533 "config": [] 00:23:19.533 }, 00:23:19.533 { 00:23:19.533 "subsystem": "accel", 00:23:19.533 "config": [ 00:23:19.534 { 00:23:19.534 
"method": "accel_set_options", 00:23:19.534 "params": { 00:23:19.534 "small_cache_size": 128, 00:23:19.534 "large_cache_size": 16, 00:23:19.534 "task_count": 2048, 00:23:19.534 "sequence_count": 2048, 00:23:19.534 "buf_count": 2048 00:23:19.534 } 00:23:19.534 } 00:23:19.534 ] 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "subsystem": "bdev", 00:23:19.534 "config": [ 00:23:19.534 { 00:23:19.534 "method": "bdev_set_options", 00:23:19.534 "params": { 00:23:19.534 "bdev_io_pool_size": 65535, 00:23:19.534 "bdev_io_cache_size": 256, 00:23:19.534 "bdev_auto_examine": true, 00:23:19.534 "iobuf_small_cache_size": 128, 00:23:19.534 "iobuf_large_cache_size": 16 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_raid_set_options", 00:23:19.534 "params": { 00:23:19.534 "process_window_size_kb": 1024, 00:23:19.534 "process_max_bandwidth_mb_sec": 0 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_iscsi_set_options", 00:23:19.534 "params": { 00:23:19.534 "timeout_sec": 30 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_nvme_set_options", 00:23:19.534 "params": { 00:23:19.534 "action_on_timeout": "none", 00:23:19.534 "timeout_us": 0, 00:23:19.534 "timeout_admin_us": 0, 00:23:19.534 "keep_alive_timeout_ms": 10000, 00:23:19.534 "arbitration_burst": 0, 00:23:19.534 "low_priority_weight": 0, 00:23:19.534 "medium_priority_weight": 0, 00:23:19.534 "high_priority_weight": 0, 00:23:19.534 "nvme_adminq_poll_period_us": 10000, 00:23:19.534 "nvme_ioq_poll_period_us": 0, 00:23:19.534 "io_queue_requests": 512, 00:23:19.534 "delay_cmd_submit": true, 00:23:19.534 "transport_retry_count": 4, 00:23:19.534 "bdev_retry_count": 3, 00:23:19.534 "transport_ack_timeout": 0, 00:23:19.534 "ctrlr_loss_timeout_sec": 0, 00:23:19.534 "reconnect_delay_sec": 0, 00:23:19.534 "fast_io_fail_timeout_sec": 0, 00:23:19.534 "disable_auto_failback": false, 00:23:19.534 "generate_uuids": false, 00:23:19.534 "transport_tos": 0, 00:23:19.534 
"nvme_error_stat": false, 00:23:19.534 "rdma_srq_size": 0, 00:23:19.534 "io_path_stat": false, 00:23:19.534 "allow_accel_sequence": false, 00:23:19.534 "rdma_max_cq_size": 0, 00:23:19.534 "rdma_cm_event_timeout_ms": 0, 00:23:19.534 "dhchap_digests": [ 00:23:19.534 "sha256", 00:23:19.534 "sha384", 00:23:19.534 "sha512" 00:23:19.534 ], 00:23:19.534 "dhchap_dhgroups": [ 00:23:19.534 "null", 00:23:19.534 "ffdhe2048", 00:23:19.534 "ffdhe3072", 00:23:19.534 "ffdhe4096", 00:23:19.534 "ffdhe6144", 00:23:19.534 "ffdhe8192" 00:23:19.534 ] 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_nvme_attach_controller", 00:23:19.534 "params": { 00:23:19.534 "name": "TLSTEST", 00:23:19.534 "trtype": "TCP", 00:23:19.534 "adrfam": "IPv4", 00:23:19.534 "traddr": "10.0.0.2", 00:23:19.534 "trsvcid": "4420", 00:23:19.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.534 "prchk_reftag": false, 00:23:19.534 "prchk_guard": false, 00:23:19.534 "ctrlr_loss_timeout_sec": 0, 00:23:19.534 "reconnect_delay_sec": 0, 00:23:19.534 "fast_io_fail_timeout_sec": 0, 00:23:19.534 "psk": "key0", 00:23:19.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.534 "hdgst": false, 00:23:19.534 "ddgst": false, 00:23:19.534 "multipath": "multipath" 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_nvme_set_hotplug", 00:23:19.534 "params": { 00:23:19.534 "period_us": 100000, 00:23:19.534 "enable": false 00:23:19.534 } 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "method": "bdev_wait_for_examine" 00:23:19.534 } 00:23:19.534 ] 00:23:19.534 }, 00:23:19.534 { 00:23:19.534 "subsystem": "nbd", 00:23:19.534 "config": [] 00:23:19.534 } 00:23:19.534 ] 00:23:19.534 }' 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 209827 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 209827 ']' 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 209827 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:19.534 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 209827 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 209827' 00:23:19.793 killing process with pid 209827 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 209827 00:23:19.793 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.793 00:23:19.793 Latency(us) 00:23:19.793 [2024-11-06T11:29:51.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.793 [2024-11-06T11:29:51.408Z] =================================================================================================================== 00:23:19.793 [2024-11-06T11:29:51.408Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 209827 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 209392 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 209392 ']' 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 209392 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:19.793 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 209392 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 209392' 00:23:20.053 killing process with pid 209392 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 209392 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 209392 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.053 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:20.053 "subsystems": [ 00:23:20.053 { 00:23:20.053 "subsystem": "keyring", 00:23:20.053 "config": [ 00:23:20.053 { 00:23:20.053 "method": "keyring_file_add_key", 00:23:20.053 "params": { 00:23:20.053 "name": "key0", 00:23:20.053 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:20.053 } 00:23:20.053 } 00:23:20.053 ] 00:23:20.053 }, 00:23:20.053 { 00:23:20.053 "subsystem": "iobuf", 00:23:20.053 "config": [ 00:23:20.053 { 00:23:20.053 "method": "iobuf_set_options", 00:23:20.053 "params": { 00:23:20.053 "small_pool_count": 8192, 00:23:20.053 "large_pool_count": 1024, 00:23:20.053 "small_bufsize": 8192, 00:23:20.054 "large_bufsize": 135168, 00:23:20.054 "enable_numa": false 00:23:20.054 } 00:23:20.054 } 00:23:20.054 ] 00:23:20.054 }, 00:23:20.054 
{ 00:23:20.054 "subsystem": "sock", 00:23:20.054 "config": [ 00:23:20.054 { 00:23:20.054 "method": "sock_set_default_impl", 00:23:20.054 "params": { 00:23:20.054 "impl_name": "posix" 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "sock_impl_set_options", 00:23:20.054 "params": { 00:23:20.054 "impl_name": "ssl", 00:23:20.054 "recv_buf_size": 4096, 00:23:20.054 "send_buf_size": 4096, 00:23:20.054 "enable_recv_pipe": true, 00:23:20.054 "enable_quickack": false, 00:23:20.054 "enable_placement_id": 0, 00:23:20.054 "enable_zerocopy_send_server": true, 00:23:20.054 "enable_zerocopy_send_client": false, 00:23:20.054 "zerocopy_threshold": 0, 00:23:20.054 "tls_version": 0, 00:23:20.054 "enable_ktls": false 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "sock_impl_set_options", 00:23:20.054 "params": { 00:23:20.054 "impl_name": "posix", 00:23:20.054 "recv_buf_size": 2097152, 00:23:20.054 "send_buf_size": 2097152, 00:23:20.054 "enable_recv_pipe": true, 00:23:20.054 "enable_quickack": false, 00:23:20.054 "enable_placement_id": 0, 00:23:20.054 "enable_zerocopy_send_server": true, 00:23:20.054 "enable_zerocopy_send_client": false, 00:23:20.054 "zerocopy_threshold": 0, 00:23:20.054 "tls_version": 0, 00:23:20.054 "enable_ktls": false 00:23:20.054 } 00:23:20.054 } 00:23:20.054 ] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "vmd", 00:23:20.054 "config": [] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "accel", 00:23:20.054 "config": [ 00:23:20.054 { 00:23:20.054 "method": "accel_set_options", 00:23:20.054 "params": { 00:23:20.054 "small_cache_size": 128, 00:23:20.054 "large_cache_size": 16, 00:23:20.054 "task_count": 2048, 00:23:20.054 "sequence_count": 2048, 00:23:20.054 "buf_count": 2048 00:23:20.054 } 00:23:20.054 } 00:23:20.054 ] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "bdev", 00:23:20.054 "config": [ 00:23:20.054 { 00:23:20.054 "method": "bdev_set_options", 00:23:20.054 "params": { 00:23:20.054 
"bdev_io_pool_size": 65535, 00:23:20.054 "bdev_io_cache_size": 256, 00:23:20.054 "bdev_auto_examine": true, 00:23:20.054 "iobuf_small_cache_size": 128, 00:23:20.054 "iobuf_large_cache_size": 16 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_raid_set_options", 00:23:20.054 "params": { 00:23:20.054 "process_window_size_kb": 1024, 00:23:20.054 "process_max_bandwidth_mb_sec": 0 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_iscsi_set_options", 00:23:20.054 "params": { 00:23:20.054 "timeout_sec": 30 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_nvme_set_options", 00:23:20.054 "params": { 00:23:20.054 "action_on_timeout": "none", 00:23:20.054 "timeout_us": 0, 00:23:20.054 "timeout_admin_us": 0, 00:23:20.054 "keep_alive_timeout_ms": 10000, 00:23:20.054 "arbitration_burst": 0, 00:23:20.054 "low_priority_weight": 0, 00:23:20.054 "medium_priority_weight": 0, 00:23:20.054 "high_priority_weight": 0, 00:23:20.054 "nvme_adminq_poll_period_us": 10000, 00:23:20.054 "nvme_ioq_poll_period_us": 0, 00:23:20.054 "io_queue_requests": 0, 00:23:20.054 "delay_cmd_submit": true, 00:23:20.054 "transport_retry_count": 4, 00:23:20.054 "bdev_retry_count": 3, 00:23:20.054 "transport_ack_timeout": 0, 00:23:20.054 "ctrlr_loss_timeout_sec": 0, 00:23:20.054 "reconnect_delay_sec": 0, 00:23:20.054 "fast_io_fail_timeout_sec": 0, 00:23:20.054 "disable_auto_failback": false, 00:23:20.054 "generate_uuids": false, 00:23:20.054 "transport_tos": 0, 00:23:20.054 "nvme_error_stat": false, 00:23:20.054 "rdma_srq_size": 0, 00:23:20.054 "io_path_stat": false, 00:23:20.054 "allow_accel_sequence": false, 00:23:20.054 "rdma_max_cq_size": 0, 00:23:20.054 "rdma_cm_event_timeout_ms": 0, 00:23:20.054 "dhchap_digests": [ 00:23:20.054 "sha256", 00:23:20.054 "sha384", 00:23:20.054 "sha512" 00:23:20.054 ], 00:23:20.054 "dhchap_dhgroups": [ 00:23:20.054 "null", 00:23:20.054 "ffdhe2048", 00:23:20.054 "ffdhe3072", 00:23:20.054 "ffdhe4096", 
00:23:20.054 "ffdhe6144", 00:23:20.054 "ffdhe8192" 00:23:20.054 ] 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_nvme_set_hotplug", 00:23:20.054 "params": { 00:23:20.054 "period_us": 100000, 00:23:20.054 "enable": false 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_malloc_create", 00:23:20.054 "params": { 00:23:20.054 "name": "malloc0", 00:23:20.054 "num_blocks": 8192, 00:23:20.054 "block_size": 4096, 00:23:20.054 "physical_block_size": 4096, 00:23:20.054 "uuid": "68e7d327-5a6c-4bc6-9d7f-8a328e816976", 00:23:20.054 "optimal_io_boundary": 0, 00:23:20.054 "md_size": 0, 00:23:20.054 "dif_type": 0, 00:23:20.054 "dif_is_head_of_md": false, 00:23:20.054 "dif_pi_format": 0 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "bdev_wait_for_examine" 00:23:20.054 } 00:23:20.054 ] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "nbd", 00:23:20.054 "config": [] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "scheduler", 00:23:20.054 "config": [ 00:23:20.054 { 00:23:20.054 "method": "framework_set_scheduler", 00:23:20.054 "params": { 00:23:20.054 "name": "static" 00:23:20.054 } 00:23:20.054 } 00:23:20.054 ] 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "subsystem": "nvmf", 00:23:20.054 "config": [ 00:23:20.054 { 00:23:20.054 "method": "nvmf_set_config", 00:23:20.054 "params": { 00:23:20.054 "discovery_filter": "match_any", 00:23:20.054 "admin_cmd_passthru": { 00:23:20.054 "identify_ctrlr": false 00:23:20.054 }, 00:23:20.054 "dhchap_digests": [ 00:23:20.054 "sha256", 00:23:20.054 "sha384", 00:23:20.054 "sha512" 00:23:20.054 ], 00:23:20.054 "dhchap_dhgroups": [ 00:23:20.054 "null", 00:23:20.054 "ffdhe2048", 00:23:20.054 "ffdhe3072", 00:23:20.054 "ffdhe4096", 00:23:20.054 "ffdhe6144", 00:23:20.054 "ffdhe8192" 00:23:20.054 ] 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "nvmf_set_max_subsystems", 00:23:20.054 "params": { 00:23:20.054 "max_subsystems": 1024 00:23:20.054 
} 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "nvmf_set_crdt", 00:23:20.054 "params": { 00:23:20.054 "crdt1": 0, 00:23:20.054 "crdt2": 0, 00:23:20.054 "crdt3": 0 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "nvmf_create_transport", 00:23:20.054 "params": { 00:23:20.054 "trtype": "TCP", 00:23:20.054 "max_queue_depth": 128, 00:23:20.054 "max_io_qpairs_per_ctrlr": 127, 00:23:20.054 "in_capsule_data_size": 4096, 00:23:20.054 "max_io_size": 131072, 00:23:20.054 "io_unit_size": 131072, 00:23:20.054 "max_aq_depth": 128, 00:23:20.054 "num_shared_buffers": 511, 00:23:20.054 "buf_cache_size": 4294967295, 00:23:20.054 "dif_insert_or_strip": false, 00:23:20.054 "zcopy": false, 00:23:20.054 "c2h_success": false, 00:23:20.054 "sock_priority": 0, 00:23:20.054 "abort_timeout_sec": 1, 00:23:20.054 "ack_timeout": 0, 00:23:20.054 "data_wr_pool_size": 0 00:23:20.054 } 00:23:20.054 }, 00:23:20.054 { 00:23:20.054 "method": "nvmf_create_subsystem", 00:23:20.054 "params": { 00:23:20.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.054 "allow_any_host": false, 00:23:20.054 "serial_number": "SPDK00000000000001", 00:23:20.055 "model_number": "SPDK bdev Controller", 00:23:20.055 "max_namespaces": 10, 00:23:20.055 "min_cntlid": 1, 00:23:20.055 "max_cntlid": 65519, 00:23:20.055 "ana_reporting": false 00:23:20.055 } 00:23:20.055 }, 00:23:20.055 { 00:23:20.055 "method": "nvmf_subsystem_add_host", 00:23:20.055 "params": { 00:23:20.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.055 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.055 "psk": "key0" 00:23:20.055 } 00:23:20.055 }, 00:23:20.055 { 00:23:20.055 "method": "nvmf_subsystem_add_ns", 00:23:20.055 "params": { 00:23:20.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.055 "namespace": { 00:23:20.055 "nsid": 1, 00:23:20.055 "bdev_name": "malloc0", 00:23:20.055 "nguid": "68E7D3275A6C4BC69D7F8A328E816976", 00:23:20.055 "uuid": "68e7d327-5a6c-4bc6-9d7f-8a328e816976", 00:23:20.055 "no_auto_visible": false 
00:23:20.055 } 00:23:20.055 } 00:23:20.055 }, 00:23:20.055 { 00:23:20.055 "method": "nvmf_subsystem_add_listener", 00:23:20.055 "params": { 00:23:20.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.055 "listen_address": { 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.055 "trtype": "TCP", 00:23:20.055 "adrfam": "IPv4", 00:23:20.055 "traddr": "10.0.0.2", 00:23:20.055 "trsvcid": "4420" 00:23:20.055 }, 00:23:20.055 "secure_channel": true 00:23:20.055 } 00:23:20.055 } 00:23:20.055 ] 00:23:20.055 } 00:23:20.055 ] 00:23:20.055 }' 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=210108 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 210108 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 210108 ']' 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.055 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.055 [2024-11-06 12:29:51.631816] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:20.055 [2024-11-06 12:29:51.631876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.314 [2024-11-06 12:29:51.703010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.314 [2024-11-06 12:29:51.741708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.314 [2024-11-06 12:29:51.741740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.314 [2024-11-06 12:29:51.741747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.314 [2024-11-06 12:29:51.741752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.314 [2024-11-06 12:29:51.741757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.314 [2024-11-06 12:29:51.742341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.573 [2024-11-06 12:29:51.953371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.573 [2024-11-06 12:29:51.985404] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.573 [2024-11-06 12:29:51.985644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=210391 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 210391 /var/tmp/bdevperf.sock 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 210391 ']' 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:23:21.141 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.142 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:21.142 "subsystems": [ 00:23:21.142 { 00:23:21.142 "subsystem": "keyring", 00:23:21.142 "config": [ 00:23:21.142 { 00:23:21.142 "method": "keyring_file_add_key", 00:23:21.142 "params": { 00:23:21.142 "name": "key0", 00:23:21.142 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:21.142 } 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "iobuf", 00:23:21.142 "config": [ 00:23:21.142 { 00:23:21.142 "method": "iobuf_set_options", 00:23:21.142 "params": { 00:23:21.142 "small_pool_count": 8192, 00:23:21.142 "large_pool_count": 1024, 00:23:21.142 "small_bufsize": 8192, 00:23:21.142 "large_bufsize": 135168, 00:23:21.142 "enable_numa": false 00:23:21.142 } 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "sock", 00:23:21.142 "config": [ 00:23:21.142 { 00:23:21.142 "method": "sock_set_default_impl", 00:23:21.142 "params": { 00:23:21.142 "impl_name": "posix" 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "sock_impl_set_options", 00:23:21.142 "params": { 00:23:21.142 "impl_name": "ssl", 00:23:21.142 "recv_buf_size": 4096, 00:23:21.142 "send_buf_size": 4096, 00:23:21.142 "enable_recv_pipe": true, 00:23:21.142 "enable_quickack": false, 00:23:21.142 "enable_placement_id": 0, 00:23:21.142 "enable_zerocopy_send_server": true, 00:23:21.142 "enable_zerocopy_send_client": false, 00:23:21.142 "zerocopy_threshold": 0, 00:23:21.142 "tls_version": 0, 00:23:21.142 "enable_ktls": false 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "sock_impl_set_options", 00:23:21.142 "params": { 
00:23:21.142 "impl_name": "posix", 00:23:21.142 "recv_buf_size": 2097152, 00:23:21.142 "send_buf_size": 2097152, 00:23:21.142 "enable_recv_pipe": true, 00:23:21.142 "enable_quickack": false, 00:23:21.142 "enable_placement_id": 0, 00:23:21.142 "enable_zerocopy_send_server": true, 00:23:21.142 "enable_zerocopy_send_client": false, 00:23:21.142 "zerocopy_threshold": 0, 00:23:21.142 "tls_version": 0, 00:23:21.142 "enable_ktls": false 00:23:21.142 } 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "vmd", 00:23:21.142 "config": [] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "accel", 00:23:21.142 "config": [ 00:23:21.142 { 00:23:21.142 "method": "accel_set_options", 00:23:21.142 "params": { 00:23:21.142 "small_cache_size": 128, 00:23:21.142 "large_cache_size": 16, 00:23:21.142 "task_count": 2048, 00:23:21.142 "sequence_count": 2048, 00:23:21.142 "buf_count": 2048 00:23:21.142 } 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "bdev", 00:23:21.142 "config": [ 00:23:21.142 { 00:23:21.142 "method": "bdev_set_options", 00:23:21.142 "params": { 00:23:21.142 "bdev_io_pool_size": 65535, 00:23:21.142 "bdev_io_cache_size": 256, 00:23:21.142 "bdev_auto_examine": true, 00:23:21.142 "iobuf_small_cache_size": 128, 00:23:21.142 "iobuf_large_cache_size": 16 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "bdev_raid_set_options", 00:23:21.142 "params": { 00:23:21.142 "process_window_size_kb": 1024, 00:23:21.142 "process_max_bandwidth_mb_sec": 0 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "bdev_iscsi_set_options", 00:23:21.142 "params": { 00:23:21.142 "timeout_sec": 30 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "bdev_nvme_set_options", 00:23:21.142 "params": { 00:23:21.142 "action_on_timeout": "none", 00:23:21.142 "timeout_us": 0, 00:23:21.142 "timeout_admin_us": 0, 00:23:21.142 "keep_alive_timeout_ms": 10000, 00:23:21.142 
"arbitration_burst": 0, 00:23:21.142 "low_priority_weight": 0, 00:23:21.142 "medium_priority_weight": 0, 00:23:21.142 "high_priority_weight": 0, 00:23:21.142 "nvme_adminq_poll_period_us": 10000, 00:23:21.142 "nvme_ioq_poll_period_us": 0, 00:23:21.142 "io_queue_requests": 512, 00:23:21.142 "delay_cmd_submit": true, 00:23:21.142 "transport_retry_count": 4, 00:23:21.142 "bdev_retry_count": 3, 00:23:21.142 "transport_ack_timeout": 0, 00:23:21.142 "ctrlr_loss_timeout_sec": 0, 00:23:21.142 "reconnect_delay_sec": 0, 00:23:21.142 "fast_io_fail_timeout_sec": 0, 00:23:21.142 "disable_auto_failback": false, 00:23:21.142 "generate_uuids": false, 00:23:21.142 "transport_tos": 0, 00:23:21.142 "nvme_error_stat": false, 00:23:21.142 "rdma_srq_size": 0, 00:23:21.142 "io_path_stat": false, 00:23:21.142 "allow_accel_sequence": false, 00:23:21.142 "rdma_max_cq_size": 0, 00:23:21.142 "rdma_cm_event_timeout_ms": 0, 00:23:21.142 "dhchap_digests": [ 00:23:21.142 "sha256", 00:23:21.142 "sha384", 00:23:21.142 "sha512" 00:23:21.142 ], 00:23:21.142 "dhchap_dhgroups": [ 00:23:21.142 "null", 00:23:21.142 "ffdhe2048", 00:23:21.142 "ffdhe3072", 00:23:21.142 "ffdhe4096", 00:23:21.142 "ffdhe6144", 00:23:21.142 "ffdhe8192" 00:23:21.142 ] 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "bdev_nvme_attach_controller", 00:23:21.142 "params": { 00:23:21.142 "name": "TLSTEST", 00:23:21.142 "trtype": "TCP", 00:23:21.142 "adrfam": "IPv4", 00:23:21.142 "traddr": "10.0.0.2", 00:23:21.142 "trsvcid": "4420", 00:23:21.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.142 "prchk_reftag": false, 00:23:21.142 "prchk_guard": false, 00:23:21.142 "ctrlr_loss_timeout_sec": 0, 00:23:21.142 "reconnect_delay_sec": 0, 00:23:21.142 "fast_io_fail_timeout_sec": 0, 00:23:21.142 "psk": "key0", 00:23:21.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.142 "hdgst": false, 00:23:21.142 "ddgst": false, 00:23:21.142 "multipath": "multipath" 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 
"method": "bdev_nvme_set_hotplug", 00:23:21.142 "params": { 00:23:21.142 "period_us": 100000, 00:23:21.142 "enable": false 00:23:21.142 } 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "method": "bdev_wait_for_examine" 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }, 00:23:21.142 { 00:23:21.142 "subsystem": "nbd", 00:23:21.142 "config": [] 00:23:21.142 } 00:23:21.142 ] 00:23:21.142 }' 00:23:21.142 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:21.142 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.142 [2024-11-06 12:29:52.731417] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:21.142 [2024-11-06 12:29:52.731488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210391 ] 00:23:21.416 [2024-11-06 12:29:52.798155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.416 [2024-11-06 12:29:52.835962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.416 [2024-11-06 12:29:52.985891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.697 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:21.697 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:21.697 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.697 Running I/O for 10 seconds... 
00:23:23.650 5817.00 IOPS, 22.72 MiB/s [2024-11-06T11:29:56.642Z] 5925.00 IOPS, 23.14 MiB/s [2024-11-06T11:29:57.579Z] 5939.67 IOPS, 23.20 MiB/s [2024-11-06T11:29:58.552Z] 5961.00 IOPS, 23.29 MiB/s [2024-11-06T11:29:59.488Z] 5953.20 IOPS, 23.25 MiB/s [2024-11-06T11:30:00.424Z] 5961.17 IOPS, 23.29 MiB/s [2024-11-06T11:30:01.360Z] 5961.86 IOPS, 23.29 MiB/s [2024-11-06T11:30:02.295Z] 5972.75 IOPS, 23.33 MiB/s [2024-11-06T11:30:03.673Z] 5975.11 IOPS, 23.34 MiB/s [2024-11-06T11:30:03.673Z] 5976.40 IOPS, 23.35 MiB/s 00:23:32.058 Latency(us) 00:23:32.058 [2024-11-06T11:30:03.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.058 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.058 Verification LBA range: start 0x0 length 0x2000 00:23:32.058 TLSTESTn1 : 10.02 5978.56 23.35 0.00 0.00 21376.83 6791.91 24546.21 00:23:32.058 [2024-11-06T11:30:03.673Z] =================================================================================================================== 00:23:32.058 [2024-11-06T11:30:03.673Z] Total : 5978.56 23.35 0.00 0.00 21376.83 6791.91 24546.21 00:23:32.058 { 00:23:32.058 "results": [ 00:23:32.058 { 00:23:32.058 "job": "TLSTESTn1", 00:23:32.058 "core_mask": "0x4", 00:23:32.058 "workload": "verify", 00:23:32.058 "status": "finished", 00:23:32.058 "verify_range": { 00:23:32.058 "start": 0, 00:23:32.058 "length": 8192 00:23:32.058 }, 00:23:32.058 "queue_depth": 128, 00:23:32.058 "io_size": 4096, 00:23:32.058 "runtime": 10.017456, 00:23:32.058 "iops": 5978.563818997558, 00:23:32.058 "mibps": 23.35376491795921, 00:23:32.058 "io_failed": 0, 00:23:32.058 "io_timeout": 0, 00:23:32.058 "avg_latency_us": 21376.831196587686, 00:23:32.058 "min_latency_us": 6791.912727272727, 00:23:32.059 "max_latency_us": 24546.21090909091 00:23:32.059 } 00:23:32.059 ], 00:23:32.059 "core_count": 1 00:23:32.059 } 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 210391 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 210391 ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 210391 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 210391 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 210391' 00:23:32.059 killing process with pid 210391 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 210391 00:23:32.059 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.059 00:23:32.059 Latency(us) 00:23:32.059 [2024-11-06T11:30:03.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.059 [2024-11-06T11:30:03.674Z] =================================================================================================================== 00:23:32.059 [2024-11-06T11:30:03.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 210391 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 210108 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 210108 ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 210108 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 210108 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 210108' 00:23:32.059 killing process with pid 210108 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 210108 00:23:32.059 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 210108 00:23:32.317 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:32.317 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.317 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=212376 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 212376 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.318 12:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 212376 ']' 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.318 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.318 [2024-11-06 12:30:03.777384] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:32.318 [2024-11-06 12:30:03.777443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.318 [2024-11-06 12:30:03.878474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.318 [2024-11-06 12:30:03.925963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.318 [2024-11-06 12:30:03.926003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.318 [2024-11-06 12:30:03.926013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.318 [2024-11-06 12:30:03.926022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:32.318 [2024-11-06 12:30:03.926030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.318 [2024-11-06 12:30:03.926756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Ipb4RD1IwM 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ipb4RD1IwM 00:23:32.577 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.836 [2024-11-06 12:30:04.317114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.836 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.095 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:33.355 [2024-11-06 12:30:04.862592] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:33.355 [2024-11-06 12:30:04.862838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.355 12:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.614 malloc0 00:23:33.614 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.873 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:34.132 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=212910 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 212910 /var/tmp/bdevperf.sock 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 212910 ']' 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.391 12:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.391 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.650 [2024-11-06 12:30:06.032905] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:34.650 [2024-11-06 12:30:06.032970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid212910 ] 00:23:34.650 [2024-11-06 12:30:06.100742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.650 [2024-11-06 12:30:06.142990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.650 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.650 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.650 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:35.219 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:35.219 [2024-11-06 12:30:06.798603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:23:35.478 nvme0n1 00:23:35.478 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.478 Running I/O for 1 seconds... 00:23:36.673 4914.00 IOPS, 19.20 MiB/s 00:23:36.673 Latency(us) 00:23:36.673 [2024-11-06T11:30:08.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.673 Verification LBA range: start 0x0 length 0x2000 00:23:36.673 nvme0n1 : 1.02 4947.34 19.33 0.00 0.00 25630.60 6434.44 29312.47 00:23:36.673 [2024-11-06T11:30:08.288Z] =================================================================================================================== 00:23:36.673 [2024-11-06T11:30:08.288Z] Total : 4947.34 19.33 0.00 0.00 25630.60 6434.44 29312.47 00:23:36.673 { 00:23:36.673 "results": [ 00:23:36.673 { 00:23:36.673 "job": "nvme0n1", 00:23:36.673 "core_mask": "0x2", 00:23:36.673 "workload": "verify", 00:23:36.673 "status": "finished", 00:23:36.674 "verify_range": { 00:23:36.674 "start": 0, 00:23:36.674 "length": 8192 00:23:36.674 }, 00:23:36.674 "queue_depth": 128, 00:23:36.674 "io_size": 4096, 00:23:36.674 "runtime": 1.019335, 00:23:36.674 "iops": 4947.343120760103, 00:23:36.674 "mibps": 19.325559065469154, 00:23:36.674 "io_failed": 0, 00:23:36.674 "io_timeout": 0, 00:23:36.674 "avg_latency_us": 25630.603520992194, 00:23:36.674 "min_latency_us": 6434.443636363636, 00:23:36.674 "max_latency_us": 29312.465454545454 00:23:36.674 } 00:23:36.674 ], 00:23:36.674 "core_count": 1 00:23:36.674 } 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 212910 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 212910 ']' 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 212910 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 212910 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 212910' 00:23:36.674 killing process with pid 212910 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 212910 00:23:36.674 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.674 00:23:36.674 Latency(us) 00:23:36.674 [2024-11-06T11:30:08.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.674 [2024-11-06T11:30:08.289Z] =================================================================================================================== 00:23:36.674 [2024-11-06T11:30:08.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 212910 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 212376 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 212376 ']' 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 212376 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.674 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 212376 00:23:36.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 212376' 00:23:36.934 killing process with pid 212376 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 212376 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 212376 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=213500 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 213500 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 213500 ']' 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.934 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.193 [2024-11-06 12:30:08.588624] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:37.193 [2024-11-06 12:30:08.588685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.193 [2024-11-06 12:30:08.688291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.193 [2024-11-06 12:30:08.735874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.193 [2024-11-06 12:30:08.735916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.193 [2024-11-06 12:30:08.735926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.193 [2024-11-06 12:30:08.735935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.193 [2024-11-06 12:30:08.735943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.193 [2024-11-06 12:30:08.736675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 [2024-11-06 12:30:08.883488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.452 malloc0 00:23:37.452 [2024-11-06 12:30:08.912699] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.452 [2024-11-06 12:30:08.912943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.452 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=213687 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 213687 /var/tmp/bdevperf.sock 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 213687 ']' 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.453 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 [2024-11-06 12:30:08.992995] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:37.453 [2024-11-06 12:30:08.993051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213687 ] 00:23:37.453 [2024-11-06 12:30:09.059723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.712 [2024-11-06 12:30:09.100570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.712 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:37.712 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:37.712 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ipb4RD1IwM 00:23:37.970 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:38.233 [2024-11-06 12:30:09.759783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.233 nvme0n1 00:23:38.233 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.492 Running I/O for 1 seconds... 
00:23:39.428 3572.00 IOPS, 13.95 MiB/s 00:23:39.428 Latency(us) 00:23:39.428 [2024-11-06T11:30:11.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.428 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.428 Verification LBA range: start 0x0 length 0x2000 00:23:39.429 nvme0n1 : 1.02 3629.65 14.18 0.00 0.00 34984.29 7447.27 45279.42 00:23:39.429 [2024-11-06T11:30:11.044Z] =================================================================================================================== 00:23:39.429 [2024-11-06T11:30:11.044Z] Total : 3629.65 14.18 0.00 0.00 34984.29 7447.27 45279.42 00:23:39.429 { 00:23:39.429 "results": [ 00:23:39.429 { 00:23:39.429 "job": "nvme0n1", 00:23:39.429 "core_mask": "0x2", 00:23:39.429 "workload": "verify", 00:23:39.429 "status": "finished", 00:23:39.429 "verify_range": { 00:23:39.429 "start": 0, 00:23:39.429 "length": 8192 00:23:39.429 }, 00:23:39.429 "queue_depth": 128, 00:23:39.429 "io_size": 4096, 00:23:39.429 "runtime": 1.019381, 00:23:39.429 "iops": 3629.653681989364, 00:23:39.429 "mibps": 14.178334695270953, 00:23:39.429 "io_failed": 0, 00:23:39.429 "io_timeout": 0, 00:23:39.429 "avg_latency_us": 34984.29226142506, 00:23:39.429 "min_latency_us": 7447.272727272727, 00:23:39.429 "max_latency_us": 45279.41818181818 00:23:39.429 } 00:23:39.429 ], 00:23:39.429 "core_count": 1 00:23:39.429 } 00:23:39.429 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:39.429 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.429 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.688 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.688 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:39.688 "subsystems": [ 00:23:39.688 { 00:23:39.688 "subsystem": 
"keyring", 00:23:39.688 "config": [ 00:23:39.688 { 00:23:39.688 "method": "keyring_file_add_key", 00:23:39.688 "params": { 00:23:39.688 "name": "key0", 00:23:39.688 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:39.688 } 00:23:39.688 } 00:23:39.688 ] 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "subsystem": "iobuf", 00:23:39.688 "config": [ 00:23:39.688 { 00:23:39.688 "method": "iobuf_set_options", 00:23:39.688 "params": { 00:23:39.688 "small_pool_count": 8192, 00:23:39.688 "large_pool_count": 1024, 00:23:39.688 "small_bufsize": 8192, 00:23:39.688 "large_bufsize": 135168, 00:23:39.688 "enable_numa": false 00:23:39.688 } 00:23:39.688 } 00:23:39.688 ] 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "subsystem": "sock", 00:23:39.688 "config": [ 00:23:39.688 { 00:23:39.688 "method": "sock_set_default_impl", 00:23:39.688 "params": { 00:23:39.688 "impl_name": "posix" 00:23:39.688 } 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "method": "sock_impl_set_options", 00:23:39.688 "params": { 00:23:39.688 "impl_name": "ssl", 00:23:39.688 "recv_buf_size": 4096, 00:23:39.688 "send_buf_size": 4096, 00:23:39.688 "enable_recv_pipe": true, 00:23:39.688 "enable_quickack": false, 00:23:39.688 "enable_placement_id": 0, 00:23:39.688 "enable_zerocopy_send_server": true, 00:23:39.688 "enable_zerocopy_send_client": false, 00:23:39.688 "zerocopy_threshold": 0, 00:23:39.688 "tls_version": 0, 00:23:39.688 "enable_ktls": false 00:23:39.688 } 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "method": "sock_impl_set_options", 00:23:39.688 "params": { 00:23:39.688 "impl_name": "posix", 00:23:39.688 "recv_buf_size": 2097152, 00:23:39.688 "send_buf_size": 2097152, 00:23:39.688 "enable_recv_pipe": true, 00:23:39.688 "enable_quickack": false, 00:23:39.688 "enable_placement_id": 0, 00:23:39.688 "enable_zerocopy_send_server": true, 00:23:39.688 "enable_zerocopy_send_client": false, 00:23:39.688 "zerocopy_threshold": 0, 00:23:39.688 "tls_version": 0, 00:23:39.688 "enable_ktls": false 00:23:39.688 } 00:23:39.688 } 00:23:39.688 
] 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "subsystem": "vmd", 00:23:39.688 "config": [] 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "subsystem": "accel", 00:23:39.688 "config": [ 00:23:39.688 { 00:23:39.688 "method": "accel_set_options", 00:23:39.688 "params": { 00:23:39.688 "small_cache_size": 128, 00:23:39.688 "large_cache_size": 16, 00:23:39.688 "task_count": 2048, 00:23:39.688 "sequence_count": 2048, 00:23:39.688 "buf_count": 2048 00:23:39.688 } 00:23:39.688 } 00:23:39.688 ] 00:23:39.688 }, 00:23:39.688 { 00:23:39.688 "subsystem": "bdev", 00:23:39.688 "config": [ 00:23:39.688 { 00:23:39.688 "method": "bdev_set_options", 00:23:39.688 "params": { 00:23:39.688 "bdev_io_pool_size": 65535, 00:23:39.689 "bdev_io_cache_size": 256, 00:23:39.689 "bdev_auto_examine": true, 00:23:39.689 "iobuf_small_cache_size": 128, 00:23:39.689 "iobuf_large_cache_size": 16 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_raid_set_options", 00:23:39.689 "params": { 00:23:39.689 "process_window_size_kb": 1024, 00:23:39.689 "process_max_bandwidth_mb_sec": 0 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_iscsi_set_options", 00:23:39.689 "params": { 00:23:39.689 "timeout_sec": 30 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_nvme_set_options", 00:23:39.689 "params": { 00:23:39.689 "action_on_timeout": "none", 00:23:39.689 "timeout_us": 0, 00:23:39.689 "timeout_admin_us": 0, 00:23:39.689 "keep_alive_timeout_ms": 10000, 00:23:39.689 "arbitration_burst": 0, 00:23:39.689 "low_priority_weight": 0, 00:23:39.689 "medium_priority_weight": 0, 00:23:39.689 "high_priority_weight": 0, 00:23:39.689 "nvme_adminq_poll_period_us": 10000, 00:23:39.689 "nvme_ioq_poll_period_us": 0, 00:23:39.689 "io_queue_requests": 0, 00:23:39.689 "delay_cmd_submit": true, 00:23:39.689 "transport_retry_count": 4, 00:23:39.689 "bdev_retry_count": 3, 00:23:39.689 "transport_ack_timeout": 0, 00:23:39.689 "ctrlr_loss_timeout_sec": 0, 
00:23:39.689 "reconnect_delay_sec": 0, 00:23:39.689 "fast_io_fail_timeout_sec": 0, 00:23:39.689 "disable_auto_failback": false, 00:23:39.689 "generate_uuids": false, 00:23:39.689 "transport_tos": 0, 00:23:39.689 "nvme_error_stat": false, 00:23:39.689 "rdma_srq_size": 0, 00:23:39.689 "io_path_stat": false, 00:23:39.689 "allow_accel_sequence": false, 00:23:39.689 "rdma_max_cq_size": 0, 00:23:39.689 "rdma_cm_event_timeout_ms": 0, 00:23:39.689 "dhchap_digests": [ 00:23:39.689 "sha256", 00:23:39.689 "sha384", 00:23:39.689 "sha512" 00:23:39.689 ], 00:23:39.689 "dhchap_dhgroups": [ 00:23:39.689 "null", 00:23:39.689 "ffdhe2048", 00:23:39.689 "ffdhe3072", 00:23:39.689 "ffdhe4096", 00:23:39.689 "ffdhe6144", 00:23:39.689 "ffdhe8192" 00:23:39.689 ] 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_nvme_set_hotplug", 00:23:39.689 "params": { 00:23:39.689 "period_us": 100000, 00:23:39.689 "enable": false 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_malloc_create", 00:23:39.689 "params": { 00:23:39.689 "name": "malloc0", 00:23:39.689 "num_blocks": 8192, 00:23:39.689 "block_size": 4096, 00:23:39.689 "physical_block_size": 4096, 00:23:39.689 "uuid": "c16f7d79-660b-4aa5-9834-de7dc2aa38d0", 00:23:39.689 "optimal_io_boundary": 0, 00:23:39.689 "md_size": 0, 00:23:39.689 "dif_type": 0, 00:23:39.689 "dif_is_head_of_md": false, 00:23:39.689 "dif_pi_format": 0 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "bdev_wait_for_examine" 00:23:39.689 } 00:23:39.689 ] 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "subsystem": "nbd", 00:23:39.689 "config": [] 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "subsystem": "scheduler", 00:23:39.689 "config": [ 00:23:39.689 { 00:23:39.689 "method": "framework_set_scheduler", 00:23:39.689 "params": { 00:23:39.689 "name": "static" 00:23:39.689 } 00:23:39.689 } 00:23:39.689 ] 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "subsystem": "nvmf", 00:23:39.689 "config": [ 00:23:39.689 { 
00:23:39.689 "method": "nvmf_set_config", 00:23:39.689 "params": { 00:23:39.689 "discovery_filter": "match_any", 00:23:39.689 "admin_cmd_passthru": { 00:23:39.689 "identify_ctrlr": false 00:23:39.689 }, 00:23:39.689 "dhchap_digests": [ 00:23:39.689 "sha256", 00:23:39.689 "sha384", 00:23:39.689 "sha512" 00:23:39.689 ], 00:23:39.689 "dhchap_dhgroups": [ 00:23:39.689 "null", 00:23:39.689 "ffdhe2048", 00:23:39.689 "ffdhe3072", 00:23:39.689 "ffdhe4096", 00:23:39.689 "ffdhe6144", 00:23:39.689 "ffdhe8192" 00:23:39.689 ] 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_set_max_subsystems", 00:23:39.689 "params": { 00:23:39.689 "max_subsystems": 1024 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_set_crdt", 00:23:39.689 "params": { 00:23:39.689 "crdt1": 0, 00:23:39.689 "crdt2": 0, 00:23:39.689 "crdt3": 0 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_create_transport", 00:23:39.689 "params": { 00:23:39.689 "trtype": "TCP", 00:23:39.689 "max_queue_depth": 128, 00:23:39.689 "max_io_qpairs_per_ctrlr": 127, 00:23:39.689 "in_capsule_data_size": 4096, 00:23:39.689 "max_io_size": 131072, 00:23:39.689 "io_unit_size": 131072, 00:23:39.689 "max_aq_depth": 128, 00:23:39.689 "num_shared_buffers": 511, 00:23:39.689 "buf_cache_size": 4294967295, 00:23:39.689 "dif_insert_or_strip": false, 00:23:39.689 "zcopy": false, 00:23:39.689 "c2h_success": false, 00:23:39.689 "sock_priority": 0, 00:23:39.689 "abort_timeout_sec": 1, 00:23:39.689 "ack_timeout": 0, 00:23:39.689 "data_wr_pool_size": 0 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_create_subsystem", 00:23:39.689 "params": { 00:23:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.689 "allow_any_host": false, 00:23:39.689 "serial_number": "00000000000000000000", 00:23:39.689 "model_number": "SPDK bdev Controller", 00:23:39.689 "max_namespaces": 32, 00:23:39.689 "min_cntlid": 1, 00:23:39.689 "max_cntlid": 65519, 00:23:39.689 
"ana_reporting": false 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_subsystem_add_host", 00:23:39.689 "params": { 00:23:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.689 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.689 "psk": "key0" 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_subsystem_add_ns", 00:23:39.689 "params": { 00:23:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.689 "namespace": { 00:23:39.689 "nsid": 1, 00:23:39.689 "bdev_name": "malloc0", 00:23:39.689 "nguid": "C16F7D79660B4AA59834DE7DC2AA38D0", 00:23:39.689 "uuid": "c16f7d79-660b-4aa5-9834-de7dc2aa38d0", 00:23:39.689 "no_auto_visible": false 00:23:39.689 } 00:23:39.689 } 00:23:39.689 }, 00:23:39.689 { 00:23:39.689 "method": "nvmf_subsystem_add_listener", 00:23:39.689 "params": { 00:23:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.689 "listen_address": { 00:23:39.689 "trtype": "TCP", 00:23:39.689 "adrfam": "IPv4", 00:23:39.689 "traddr": "10.0.0.2", 00:23:39.689 "trsvcid": "4420" 00:23:39.689 }, 00:23:39.689 "secure_channel": false, 00:23:39.689 "sock_impl": "ssl" 00:23:39.689 } 00:23:39.689 } 00:23:39.689 ] 00:23:39.689 } 00:23:39.689 ] 00:23:39.689 }' 00:23:39.689 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:39.949 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:39.949 "subsystems": [ 00:23:39.949 { 00:23:39.949 "subsystem": "keyring", 00:23:39.949 "config": [ 00:23:39.949 { 00:23:39.949 "method": "keyring_file_add_key", 00:23:39.949 "params": { 00:23:39.949 "name": "key0", 00:23:39.949 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:39.949 } 00:23:39.949 } 00:23:39.949 ] 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "subsystem": "iobuf", 00:23:39.949 "config": [ 00:23:39.949 { 00:23:39.949 "method": "iobuf_set_options", 00:23:39.949 "params": { 00:23:39.949 
"small_pool_count": 8192, 00:23:39.949 "large_pool_count": 1024, 00:23:39.949 "small_bufsize": 8192, 00:23:39.949 "large_bufsize": 135168, 00:23:39.949 "enable_numa": false 00:23:39.949 } 00:23:39.949 } 00:23:39.949 ] 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "subsystem": "sock", 00:23:39.949 "config": [ 00:23:39.949 { 00:23:39.949 "method": "sock_set_default_impl", 00:23:39.949 "params": { 00:23:39.949 "impl_name": "posix" 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "sock_impl_set_options", 00:23:39.949 "params": { 00:23:39.949 "impl_name": "ssl", 00:23:39.949 "recv_buf_size": 4096, 00:23:39.949 "send_buf_size": 4096, 00:23:39.949 "enable_recv_pipe": true, 00:23:39.949 "enable_quickack": false, 00:23:39.949 "enable_placement_id": 0, 00:23:39.949 "enable_zerocopy_send_server": true, 00:23:39.949 "enable_zerocopy_send_client": false, 00:23:39.949 "zerocopy_threshold": 0, 00:23:39.949 "tls_version": 0, 00:23:39.949 "enable_ktls": false 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "sock_impl_set_options", 00:23:39.949 "params": { 00:23:39.949 "impl_name": "posix", 00:23:39.949 "recv_buf_size": 2097152, 00:23:39.949 "send_buf_size": 2097152, 00:23:39.949 "enable_recv_pipe": true, 00:23:39.949 "enable_quickack": false, 00:23:39.949 "enable_placement_id": 0, 00:23:39.949 "enable_zerocopy_send_server": true, 00:23:39.949 "enable_zerocopy_send_client": false, 00:23:39.949 "zerocopy_threshold": 0, 00:23:39.949 "tls_version": 0, 00:23:39.949 "enable_ktls": false 00:23:39.949 } 00:23:39.949 } 00:23:39.949 ] 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "subsystem": "vmd", 00:23:39.949 "config": [] 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "subsystem": "accel", 00:23:39.949 "config": [ 00:23:39.949 { 00:23:39.949 "method": "accel_set_options", 00:23:39.949 "params": { 00:23:39.949 "small_cache_size": 128, 00:23:39.949 "large_cache_size": 16, 00:23:39.949 "task_count": 2048, 00:23:39.949 "sequence_count": 2048, 00:23:39.949 
"buf_count": 2048 00:23:39.949 } 00:23:39.949 } 00:23:39.949 ] 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "subsystem": "bdev", 00:23:39.949 "config": [ 00:23:39.949 { 00:23:39.949 "method": "bdev_set_options", 00:23:39.949 "params": { 00:23:39.949 "bdev_io_pool_size": 65535, 00:23:39.949 "bdev_io_cache_size": 256, 00:23:39.949 "bdev_auto_examine": true, 00:23:39.949 "iobuf_small_cache_size": 128, 00:23:39.949 "iobuf_large_cache_size": 16 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_raid_set_options", 00:23:39.949 "params": { 00:23:39.949 "process_window_size_kb": 1024, 00:23:39.949 "process_max_bandwidth_mb_sec": 0 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_iscsi_set_options", 00:23:39.949 "params": { 00:23:39.949 "timeout_sec": 30 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_nvme_set_options", 00:23:39.949 "params": { 00:23:39.949 "action_on_timeout": "none", 00:23:39.949 "timeout_us": 0, 00:23:39.949 "timeout_admin_us": 0, 00:23:39.949 "keep_alive_timeout_ms": 10000, 00:23:39.949 "arbitration_burst": 0, 00:23:39.949 "low_priority_weight": 0, 00:23:39.949 "medium_priority_weight": 0, 00:23:39.949 "high_priority_weight": 0, 00:23:39.949 "nvme_adminq_poll_period_us": 10000, 00:23:39.949 "nvme_ioq_poll_period_us": 0, 00:23:39.949 "io_queue_requests": 512, 00:23:39.949 "delay_cmd_submit": true, 00:23:39.949 "transport_retry_count": 4, 00:23:39.949 "bdev_retry_count": 3, 00:23:39.949 "transport_ack_timeout": 0, 00:23:39.949 "ctrlr_loss_timeout_sec": 0, 00:23:39.949 "reconnect_delay_sec": 0, 00:23:39.949 "fast_io_fail_timeout_sec": 0, 00:23:39.949 "disable_auto_failback": false, 00:23:39.949 "generate_uuids": false, 00:23:39.949 "transport_tos": 0, 00:23:39.949 "nvme_error_stat": false, 00:23:39.949 "rdma_srq_size": 0, 00:23:39.949 "io_path_stat": false, 00:23:39.949 "allow_accel_sequence": false, 00:23:39.949 "rdma_max_cq_size": 0, 00:23:39.949 "rdma_cm_event_timeout_ms": 0, 
00:23:39.949 "dhchap_digests": [ 00:23:39.949 "sha256", 00:23:39.949 "sha384", 00:23:39.949 "sha512" 00:23:39.949 ], 00:23:39.949 "dhchap_dhgroups": [ 00:23:39.949 "null", 00:23:39.949 "ffdhe2048", 00:23:39.949 "ffdhe3072", 00:23:39.949 "ffdhe4096", 00:23:39.949 "ffdhe6144", 00:23:39.949 "ffdhe8192" 00:23:39.949 ] 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_nvme_attach_controller", 00:23:39.949 "params": { 00:23:39.949 "name": "nvme0", 00:23:39.949 "trtype": "TCP", 00:23:39.949 "adrfam": "IPv4", 00:23:39.949 "traddr": "10.0.0.2", 00:23:39.949 "trsvcid": "4420", 00:23:39.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.949 "prchk_reftag": false, 00:23:39.949 "prchk_guard": false, 00:23:39.949 "ctrlr_loss_timeout_sec": 0, 00:23:39.949 "reconnect_delay_sec": 0, 00:23:39.949 "fast_io_fail_timeout_sec": 0, 00:23:39.949 "psk": "key0", 00:23:39.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.949 "hdgst": false, 00:23:39.949 "ddgst": false, 00:23:39.949 "multipath": "multipath" 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_nvme_set_hotplug", 00:23:39.949 "params": { 00:23:39.949 "period_us": 100000, 00:23:39.949 "enable": false 00:23:39.949 } 00:23:39.949 }, 00:23:39.949 { 00:23:39.949 "method": "bdev_enable_histogram", 00:23:39.949 "params": { 00:23:39.950 "name": "nvme0n1", 00:23:39.950 "enable": true 00:23:39.950 } 00:23:39.950 }, 00:23:39.950 { 00:23:39.950 "method": "bdev_wait_for_examine" 00:23:39.950 } 00:23:39.950 ] 00:23:39.950 }, 00:23:39.950 { 00:23:39.950 "subsystem": "nbd", 00:23:39.950 "config": [] 00:23:39.950 } 00:23:39.950 ] 00:23:39.950 }' 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 213687 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 213687 ']' 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 213687 00:23:39.950 12:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 213687 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 213687' 00:23:39.950 killing process with pid 213687 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 213687 00:23:39.950 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.950 00:23:39.950 Latency(us) 00:23:39.950 [2024-11-06T11:30:11.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.950 [2024-11-06T11:30:11.565Z] =================================================================================================================== 00:23:39.950 [2024-11-06T11:30:11.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.950 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 213687 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 213500 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 213500 ']' 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 213500 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.209 12:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 213500 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 213500' 00:23:40.209 killing process with pid 213500 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 213500 00:23:40.209 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 213500 00:23:40.468 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:40.468 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.468 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.468 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:40.468 "subsystems": [ 00:23:40.468 { 00:23:40.468 "subsystem": "keyring", 00:23:40.468 "config": [ 00:23:40.468 { 00:23:40.468 "method": "keyring_file_add_key", 00:23:40.468 "params": { 00:23:40.468 "name": "key0", 00:23:40.468 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:40.468 } 00:23:40.468 } 00:23:40.468 ] 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "subsystem": "iobuf", 00:23:40.468 "config": [ 00:23:40.468 { 00:23:40.468 "method": "iobuf_set_options", 00:23:40.468 "params": { 00:23:40.468 "small_pool_count": 8192, 00:23:40.468 "large_pool_count": 1024, 00:23:40.468 "small_bufsize": 8192, 00:23:40.468 "large_bufsize": 135168, 00:23:40.468 "enable_numa": false 00:23:40.468 } 00:23:40.468 } 00:23:40.468 ] 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "subsystem": "sock", 00:23:40.468 "config": [ 00:23:40.468 { 
00:23:40.468 "method": "sock_set_default_impl", 00:23:40.468 "params": { 00:23:40.468 "impl_name": "posix" 00:23:40.468 } 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "method": "sock_impl_set_options", 00:23:40.468 "params": { 00:23:40.468 "impl_name": "ssl", 00:23:40.468 "recv_buf_size": 4096, 00:23:40.468 "send_buf_size": 4096, 00:23:40.468 "enable_recv_pipe": true, 00:23:40.468 "enable_quickack": false, 00:23:40.468 "enable_placement_id": 0, 00:23:40.468 "enable_zerocopy_send_server": true, 00:23:40.468 "enable_zerocopy_send_client": false, 00:23:40.468 "zerocopy_threshold": 0, 00:23:40.468 "tls_version": 0, 00:23:40.468 "enable_ktls": false 00:23:40.468 } 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "method": "sock_impl_set_options", 00:23:40.468 "params": { 00:23:40.468 "impl_name": "posix", 00:23:40.468 "recv_buf_size": 2097152, 00:23:40.468 "send_buf_size": 2097152, 00:23:40.468 "enable_recv_pipe": true, 00:23:40.468 "enable_quickack": false, 00:23:40.468 "enable_placement_id": 0, 00:23:40.468 "enable_zerocopy_send_server": true, 00:23:40.468 "enable_zerocopy_send_client": false, 00:23:40.468 "zerocopy_threshold": 0, 00:23:40.468 "tls_version": 0, 00:23:40.468 "enable_ktls": false 00:23:40.468 } 00:23:40.468 } 00:23:40.468 ] 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "subsystem": "vmd", 00:23:40.468 "config": [] 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "subsystem": "accel", 00:23:40.468 "config": [ 00:23:40.468 { 00:23:40.468 "method": "accel_set_options", 00:23:40.468 "params": { 00:23:40.468 "small_cache_size": 128, 00:23:40.468 "large_cache_size": 16, 00:23:40.468 "task_count": 2048, 00:23:40.468 "sequence_count": 2048, 00:23:40.468 "buf_count": 2048 00:23:40.468 } 00:23:40.468 } 00:23:40.468 ] 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "subsystem": "bdev", 00:23:40.468 "config": [ 00:23:40.468 { 00:23:40.468 "method": "bdev_set_options", 00:23:40.468 "params": { 00:23:40.468 "bdev_io_pool_size": 65535, 00:23:40.468 "bdev_io_cache_size": 256, 
00:23:40.468 "bdev_auto_examine": true, 00:23:40.468 "iobuf_small_cache_size": 128, 00:23:40.468 "iobuf_large_cache_size": 16 00:23:40.468 } 00:23:40.468 }, 00:23:40.468 { 00:23:40.468 "method": "bdev_raid_set_options", 00:23:40.468 "params": { 00:23:40.469 "process_window_size_kb": 1024, 00:23:40.469 "process_max_bandwidth_mb_sec": 0 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "bdev_iscsi_set_options", 00:23:40.469 "params": { 00:23:40.469 "timeout_sec": 30 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "bdev_nvme_set_options", 00:23:40.469 "params": { 00:23:40.469 "action_on_timeout": "none", 00:23:40.469 "timeout_us": 0, 00:23:40.469 "timeout_admin_us": 0, 00:23:40.469 "keep_alive_timeout_ms": 10000, 00:23:40.469 "arbitration_burst": 0, 00:23:40.469 "low_priority_weight": 0, 00:23:40.469 "medium_priority_weight": 0, 00:23:40.469 "high_priority_weight": 0, 00:23:40.469 "nvme_adminq_poll_period_us": 10000, 00:23:40.469 "nvme_ioq_poll_period_us": 0, 00:23:40.469 "io_queue_requests": 0, 00:23:40.469 "delay_cmd_submit": true, 00:23:40.469 "transport_retry_count": 4, 00:23:40.469 "bdev_retry_count": 3, 00:23:40.469 "transport_ack_timeout": 0, 00:23:40.469 "ctrlr_loss_timeout_sec": 0, 00:23:40.469 "reconnect_delay_sec": 0, 00:23:40.469 "fast_io_fail_timeout_sec": 0, 00:23:40.469 "disable_auto_failback": false, 00:23:40.469 "generate_uuids": false, 00:23:40.469 "transport_tos": 0, 00:23:40.469 "nvme_error_stat": false, 00:23:40.469 "rdma_srq_size": 0, 00:23:40.469 "io_path_stat": false, 00:23:40.469 "allow_accel_sequence": false, 00:23:40.469 "rdma_max_cq_size": 0, 00:23:40.469 "rdma_cm_event_timeout_ms": 0, 00:23:40.469 "dhchap_digests": [ 00:23:40.469 "sha256", 00:23:40.469 "sha384", 00:23:40.469 "sha512" 00:23:40.469 ], 00:23:40.469 "dhchap_dhgroups": [ 00:23:40.469 "null", 00:23:40.469 "ffdhe2048", 00:23:40.469 "ffdhe3072", 00:23:40.469 "ffdhe4096", 00:23:40.469 "ffdhe6144", 00:23:40.469 "ffdhe8192" 00:23:40.469 ] 
00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "bdev_nvme_set_hotplug", 00:23:40.469 "params": { 00:23:40.469 "period_us": 100000, 00:23:40.469 "enable": false 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "bdev_malloc_create", 00:23:40.469 "params": { 00:23:40.469 "name": "malloc0", 00:23:40.469 "num_blocks": 8192, 00:23:40.469 "block_size": 4096, 00:23:40.469 "physical_block_size": 4096, 00:23:40.469 "uuid": "c16f7d79-660b-4aa5-9834-de7dc2aa38d0", 00:23:40.469 "optimal_io_boundary": 0, 00:23:40.469 "md_size": 0, 00:23:40.469 "dif_type": 0, 00:23:40.469 "dif_is_head_of_md": false, 00:23:40.469 "dif_pi_format": 0 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "bdev_wait_for_examine" 00:23:40.469 } 00:23:40.469 ] 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "subsystem": "nbd", 00:23:40.469 "config": [] 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "subsystem": "scheduler", 00:23:40.469 "config": [ 00:23:40.469 { 00:23:40.469 "method": "framework_set_scheduler", 00:23:40.469 "params": { 00:23:40.469 "name": "static" 00:23:40.469 } 00:23:40.469 } 00:23:40.469 ] 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "subsystem": "nvmf", 00:23:40.469 "config": [ 00:23:40.469 { 00:23:40.469 "method": "nvmf_set_config", 00:23:40.469 "params": { 00:23:40.469 "discovery_filter": "match_any", 00:23:40.469 "admin_cmd_passthru": { 00:23:40.469 "identify_ctrlr": false 00:23:40.469 }, 00:23:40.469 "dhchap_digests": [ 00:23:40.469 "sha256", 00:23:40.469 "sha384", 00:23:40.469 "sha512" 00:23:40.469 ], 00:23:40.469 "dhchap_dhgroups": [ 00:23:40.469 "null", 00:23:40.469 "ffdhe2048", 00:23:40.469 "ffdhe3072", 00:23:40.469 "ffdhe4096", 00:23:40.469 "ffdhe6144", 00:23:40.469 "ffdhe8192" 00:23:40.469 ] 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "nvmf_set_max_subsystems", 00:23:40.469 "params": { 00:23:40.469 "max_subsystems": 1024 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": 
"nvmf_set_crdt", 00:23:40.469 "params": { 00:23:40.469 "crdt1": 0, 00:23:40.469 "crdt2": 0, 00:23:40.469 "crdt3": 0 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "nvmf_create_transport", 00:23:40.469 "params": { 00:23:40.469 "trtype": "TCP", 00:23:40.469 "max_queue_depth": 128, 00:23:40.469 "max_io_qpairs_per_ctrlr": 127, 00:23:40.469 "in_capsule_data_size": 4096, 00:23:40.469 "max_io_size": 131072, 00:23:40.469 "io_unit_size": 131072, 00:23:40.469 "max_aq_depth": 128, 00:23:40.469 "num_shared_buffers": 511, 00:23:40.469 "buf_cache_size": 4294967295, 00:23:40.469 "dif_insert_or_strip": false, 00:23:40.469 "zcopy": false, 00:23:40.469 "c2h_success": false, 00:23:40.469 "sock_priority": 0, 00:23:40.469 "abort_timeout_sec": 1, 00:23:40.469 "ack_timeout": 0, 00:23:40.469 "data_wr_pool_size": 0 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "nvmf_create_subsystem", 00:23:40.469 "params": { 00:23:40.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.469 "allow_any_host": false, 00:23:40.469 "serial_number": "00000000000000000000", 00:23:40.469 "model_number": "SPDK bdev Controller", 00:23:40.469 "max_namespaces": 32, 00:23:40.469 "min_cntlid": 1, 00:23:40.469 "max_cntlid": 65519, 00:23:40.469 "ana_reporting": false 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "nvmf_subsystem_add_host", 00:23:40.469 "params": { 00:23:40.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.469 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.469 "psk": "key0" 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 00:23:40.469 "method": "nvmf_subsystem_add_ns", 00:23:40.469 "params": { 00:23:40.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.469 "namespace": { 00:23:40.469 "nsid": 1, 00:23:40.469 "bdev_name": "malloc0", 00:23:40.469 "nguid": "C16F7D79660B4AA59834DE7DC2AA38D0", 00:23:40.469 "uuid": "c16f7d79-660b-4aa5-9834-de7dc2aa38d0", 00:23:40.469 "no_auto_visible": false 00:23:40.469 } 00:23:40.469 } 00:23:40.469 }, 00:23:40.469 { 
00:23:40.469 "method": "nvmf_subsystem_add_listener", 00:23:40.469 "params": { 00:23:40.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.469 "listen_address": { 00:23:40.469 "trtype": "TCP", 00:23:40.469 "adrfam": "IPv4", 00:23:40.469 "traddr": "10.0.0.2", 00:23:40.469 "trsvcid": "4420" 00:23:40.469 }, 00:23:40.469 "secure_channel": false, 00:23:40.469 "sock_impl": "ssl" 00:23:40.469 } 00:23:40.469 } 00:23:40.469 ] 00:23:40.469 } 00:23:40.469 ] 00:23:40.469 }' 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=214394 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 214394 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 214394 ']' 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:40.469 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.469 [2024-11-06 12:30:11.985469] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:40.469 [2024-11-06 12:30:11.985527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.469 [2024-11-06 12:30:12.084453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.729 [2024-11-06 12:30:12.132171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.729 [2024-11-06 12:30:12.132211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.729 [2024-11-06 12:30:12.132222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.729 [2024-11-06 12:30:12.132231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.729 [2024-11-06 12:30:12.132239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.729 [2024-11-06 12:30:12.133005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.987 [2024-11-06 12:30:12.354223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.987 [2024-11-06 12:30:12.386232] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.987 [2024-11-06 12:30:12.386483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.556 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:41.556 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:41.556 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.556 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.556 12:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=214606 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 214606 /var/tmp/bdevperf.sock 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 214606 ']' 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:23:41.556 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:41.556 "subsystems": [ 00:23:41.556 { 00:23:41.556 "subsystem": "keyring", 00:23:41.556 "config": [ 00:23:41.556 { 00:23:41.556 "method": "keyring_file_add_key", 00:23:41.556 "params": { 00:23:41.556 "name": "key0", 00:23:41.556 "path": "/tmp/tmp.Ipb4RD1IwM" 00:23:41.556 } 00:23:41.556 } 00:23:41.556 ] 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "subsystem": "iobuf", 00:23:41.556 "config": [ 00:23:41.556 { 00:23:41.556 "method": "iobuf_set_options", 00:23:41.556 "params": { 00:23:41.556 "small_pool_count": 8192, 00:23:41.556 "large_pool_count": 1024, 00:23:41.556 "small_bufsize": 8192, 00:23:41.556 "large_bufsize": 135168, 00:23:41.556 "enable_numa": false 00:23:41.556 } 00:23:41.556 } 00:23:41.556 ] 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "subsystem": "sock", 00:23:41.556 "config": [ 00:23:41.556 { 00:23:41.556 "method": "sock_set_default_impl", 00:23:41.556 "params": { 00:23:41.556 "impl_name": "posix" 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "sock_impl_set_options", 00:23:41.556 "params": { 00:23:41.556 "impl_name": "ssl", 00:23:41.556 "recv_buf_size": 4096, 00:23:41.556 "send_buf_size": 4096, 00:23:41.556 "enable_recv_pipe": true, 00:23:41.556 "enable_quickack": false, 00:23:41.556 "enable_placement_id": 0, 00:23:41.556 "enable_zerocopy_send_server": true, 00:23:41.556 "enable_zerocopy_send_client": false, 00:23:41.556 "zerocopy_threshold": 0, 00:23:41.556 "tls_version": 0, 00:23:41.556 "enable_ktls": false 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "sock_impl_set_options", 00:23:41.556 "params": { 00:23:41.556 "impl_name": "posix", 00:23:41.556 "recv_buf_size": 2097152, 00:23:41.556 "send_buf_size": 2097152, 00:23:41.556 "enable_recv_pipe": true, 00:23:41.556 "enable_quickack": false, 00:23:41.556 "enable_placement_id": 0, 00:23:41.556 "enable_zerocopy_send_server": true, 00:23:41.556 
"enable_zerocopy_send_client": false, 00:23:41.556 "zerocopy_threshold": 0, 00:23:41.556 "tls_version": 0, 00:23:41.556 "enable_ktls": false 00:23:41.556 } 00:23:41.556 } 00:23:41.556 ] 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "subsystem": "vmd", 00:23:41.556 "config": [] 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "subsystem": "accel", 00:23:41.556 "config": [ 00:23:41.556 { 00:23:41.556 "method": "accel_set_options", 00:23:41.556 "params": { 00:23:41.556 "small_cache_size": 128, 00:23:41.556 "large_cache_size": 16, 00:23:41.556 "task_count": 2048, 00:23:41.556 "sequence_count": 2048, 00:23:41.556 "buf_count": 2048 00:23:41.556 } 00:23:41.556 } 00:23:41.556 ] 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "subsystem": "bdev", 00:23:41.556 "config": [ 00:23:41.556 { 00:23:41.556 "method": "bdev_set_options", 00:23:41.556 "params": { 00:23:41.556 "bdev_io_pool_size": 65535, 00:23:41.556 "bdev_io_cache_size": 256, 00:23:41.556 "bdev_auto_examine": true, 00:23:41.556 "iobuf_small_cache_size": 128, 00:23:41.556 "iobuf_large_cache_size": 16 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "bdev_raid_set_options", 00:23:41.556 "params": { 00:23:41.556 "process_window_size_kb": 1024, 00:23:41.556 "process_max_bandwidth_mb_sec": 0 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "bdev_iscsi_set_options", 00:23:41.556 "params": { 00:23:41.556 "timeout_sec": 30 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "bdev_nvme_set_options", 00:23:41.556 "params": { 00:23:41.556 "action_on_timeout": "none", 00:23:41.556 "timeout_us": 0, 00:23:41.556 "timeout_admin_us": 0, 00:23:41.556 "keep_alive_timeout_ms": 10000, 00:23:41.556 "arbitration_burst": 0, 00:23:41.556 "low_priority_weight": 0, 00:23:41.556 "medium_priority_weight": 0, 00:23:41.556 "high_priority_weight": 0, 00:23:41.556 "nvme_adminq_poll_period_us": 10000, 00:23:41.556 "nvme_ioq_poll_period_us": 0, 00:23:41.556 "io_queue_requests": 512, 00:23:41.556 
"delay_cmd_submit": true, 00:23:41.556 "transport_retry_count": 4, 00:23:41.556 "bdev_retry_count": 3, 00:23:41.556 "transport_ack_timeout": 0, 00:23:41.556 "ctrlr_loss_timeout_sec": 0, 00:23:41.556 "reconnect_delay_sec": 0, 00:23:41.556 "fast_io_fail_timeout_sec": 0, 00:23:41.556 "disable_auto_failback": false, 00:23:41.556 "generate_uuids": false, 00:23:41.556 "transport_tos": 0, 00:23:41.556 "nvme_error_stat": false, 00:23:41.556 "rdma_srq_size": 0, 00:23:41.556 "io_path_stat": false, 00:23:41.556 "allow_accel_sequence": false, 00:23:41.556 "rdma_max_cq_size": 0, 00:23:41.556 "rdma_cm_event_timeout_ms": 0, 00:23:41.556 "dhchap_digests": [ 00:23:41.556 "sha256", 00:23:41.556 "sha384", 00:23:41.556 "sha512" 00:23:41.556 ], 00:23:41.556 "dhchap_dhgroups": [ 00:23:41.556 "null", 00:23:41.556 "ffdhe2048", 00:23:41.556 "ffdhe3072", 00:23:41.556 "ffdhe4096", 00:23:41.556 "ffdhe6144", 00:23:41.556 "ffdhe8192" 00:23:41.556 ] 00:23:41.556 } 00:23:41.556 }, 00:23:41.556 { 00:23:41.556 "method": "bdev_nvme_attach_controller", 00:23:41.556 "params": { 00:23:41.556 "name": "nvme0", 00:23:41.556 "trtype": "TCP", 00:23:41.556 "adrfam": "IPv4", 00:23:41.556 "traddr": "10.0.0.2", 00:23:41.556 "trsvcid": "4420", 00:23:41.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.556 "prchk_reftag": false, 00:23:41.557 "prchk_guard": false, 00:23:41.557 "ctrlr_loss_timeout_sec": 0, 00:23:41.557 "reconnect_delay_sec": 0, 00:23:41.557 "fast_io_fail_timeout_sec": 0, 00:23:41.557 "psk": "key0", 00:23:41.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.557 "hdgst": false, 00:23:41.557 "ddgst": false, 00:23:41.557 "multipath": "multipath" 00:23:41.557 } 00:23:41.557 }, 00:23:41.557 { 00:23:41.557 "method": "bdev_nvme_set_hotplug", 00:23:41.557 "params": { 00:23:41.557 "period_us": 100000, 00:23:41.557 "enable": false 00:23:41.557 } 00:23:41.557 }, 00:23:41.557 { 00:23:41.557 "method": "bdev_enable_histogram", 00:23:41.557 "params": { 00:23:41.557 "name": "nvme0n1", 00:23:41.557 "enable": 
true 00:23:41.557 } 00:23:41.557 }, 00:23:41.557 { 00:23:41.557 "method": "bdev_wait_for_examine" 00:23:41.557 } 00:23:41.557 ] 00:23:41.557 }, 00:23:41.557 { 00:23:41.557 "subsystem": "nbd", 00:23:41.557 "config": [] 00:23:41.557 } 00:23:41.557 ] 00:23:41.557 }' 00:23:41.557 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.557 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:41.557 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.557 [2024-11-06 12:30:13.040673] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:23:41.557 [2024-11-06 12:30:13.040717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214606 ] 00:23:41.557 [2024-11-06 12:30:13.094654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.557 [2024-11-06 12:30:13.134654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.815 [2024-11-06 12:30:13.284683] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.382 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:42.382 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:42.382 12:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.382 12:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:42.641 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.641 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.900 Running I/O for 1 seconds... 00:23:43.835 3756.00 IOPS, 14.67 MiB/s 00:23:43.835 Latency(us) 00:23:43.835 [2024-11-06T11:30:15.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.835 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.835 Verification LBA range: start 0x0 length 0x2000 00:23:43.835 nvme0n1 : 1.02 3795.50 14.83 0.00 0.00 33425.85 6166.34 31695.59 00:23:43.835 [2024-11-06T11:30:15.450Z] =================================================================================================================== 00:23:43.835 [2024-11-06T11:30:15.450Z] Total : 3795.50 14.83 0.00 0.00 33425.85 6166.34 31695.59 00:23:43.835 { 00:23:43.835 "results": [ 00:23:43.835 { 00:23:43.835 "job": "nvme0n1", 00:23:43.835 "core_mask": "0x2", 00:23:43.835 "workload": "verify", 00:23:43.835 "status": "finished", 00:23:43.835 "verify_range": { 00:23:43.835 "start": 0, 00:23:43.835 "length": 8192 00:23:43.835 }, 00:23:43.835 "queue_depth": 128, 00:23:43.835 "io_size": 4096, 00:23:43.835 "runtime": 1.023317, 00:23:43.835 "iops": 3795.500319060467, 00:23:43.835 "mibps": 14.82617312132995, 00:23:43.835 "io_failed": 0, 00:23:43.835 "io_timeout": 0, 00:23:43.835 "avg_latency_us": 33425.84881565397, 00:23:43.835 "min_latency_us": 6166.341818181818, 00:23:43.835 "max_latency_us": 31695.592727272728 00:23:43.835 } 00:23:43.835 ], 00:23:43.835 "core_count": 1 00:23:43.835 } 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:43.835 12:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:23:43.835 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:43.835 nvmf_trace.0 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 214606 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 214606 ']' 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 214606 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 214606 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 214606' 00:23:44.094 killing process with pid 214606 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 214606 00:23:44.094 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.094 00:23:44.094 Latency(us) 00:23:44.094 [2024-11-06T11:30:15.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.094 [2024-11-06T11:30:15.709Z] =================================================================================================================== 00:23:44.094 [2024-11-06T11:30:15.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.094 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 214606 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.353 rmmod nvme_tcp 00:23:44.353 rmmod nvme_fabrics 00:23:44.353 rmmod nvme_keyring 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 214394 ']' 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 214394 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 214394 ']' 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 214394 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 214394 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 214394' 00:23:44.353 killing process with pid 214394 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 214394 00:23:44.353 12:30:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 214394 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.612 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.518 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.518 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Q6dnfAfZNh /tmp/tmp.CqiUGnuqbh /tmp/tmp.Ipb4RD1IwM 00:23:46.518 00:23:46.518 real 1m25.168s 00:23:46.518 user 2m13.788s 00:23:46.518 sys 0m32.645s 00:23:46.518 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:46.518 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.518 ************************************ 00:23:46.518 END TEST nvmf_tls 00:23:46.518 ************************************ 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.778 ************************************ 00:23:46.778 START TEST nvmf_fips 00:23:46.778 ************************************ 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:46.778 * Looking for test storage... 00:23:46.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.778 
12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:46.778 12:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.778 --rc genhtml_branch_coverage=1 00:23:46.778 --rc genhtml_function_coverage=1 00:23:46.778 --rc genhtml_legend=1 00:23:46.778 --rc geninfo_all_blocks=1 00:23:46.778 --rc geninfo_unexecuted_blocks=1 00:23:46.778 00:23:46.778 ' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.778 --rc genhtml_branch_coverage=1 00:23:46.778 --rc genhtml_function_coverage=1 00:23:46.778 --rc genhtml_legend=1 00:23:46.778 --rc geninfo_all_blocks=1 00:23:46.778 --rc geninfo_unexecuted_blocks=1 00:23:46.778 00:23:46.778 ' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.778 --rc genhtml_branch_coverage=1 00:23:46.778 --rc genhtml_function_coverage=1 00:23:46.778 --rc genhtml_legend=1 00:23:46.778 --rc geninfo_all_blocks=1 00:23:46.778 --rc geninfo_unexecuted_blocks=1 00:23:46.778 00:23:46.778 ' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:46.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.778 --rc genhtml_branch_coverage=1 00:23:46.778 --rc genhtml_function_coverage=1 00:23:46.778 --rc genhtml_legend=1 00:23:46.778 --rc geninfo_all_blocks=1 00:23:46.778 --rc geninfo_unexecuted_blocks=1 00:23:46.778 00:23:46.778 ' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.778 12:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.778 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.038 12:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.038 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:47.039 Error setting digest 00:23:47.039 4082E52B427F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:47.039 4082E52B427F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.039 12:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.039 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.615 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.615 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.616 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.616 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.616 12:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:23:53.616 00:23:53.616 --- 10.0.0.2 ping statistics --- 00:23:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.616 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:53.616 00:23:53.616 --- 10.0.0.1 ping statistics --- 00:23:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.616 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.616 12:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=218715 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 218715 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 218715 ']' 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.616 [2024-11-06 12:30:24.493650] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:53.616 [2024-11-06 12:30:24.493713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.616 [2024-11-06 12:30:24.567064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.616 [2024-11-06 12:30:24.608170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.616 [2024-11-06 12:30:24.608204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.616 [2024-11-06 12:30:24.608212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.616 [2024-11-06 12:30:24.608218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.616 [2024-11-06 12:30:24.608223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.616 [2024-11-06 12:30:24.608774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:53.616 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.KMK 00:23:53.617 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.617 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.KMK 00:23:53.617 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.KMK 00:23:53.617 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.KMK 00:23:53.617 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.617 [2024-11-06 12:30:25.002393] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.617 [2024-11-06 12:30:25.018405] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.617 [2024-11-06 12:30:25.018631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.617 malloc0 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=218990 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 218990 /var/tmp/bdevperf.sock 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 218990 ']' 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.617 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.617 [2024-11-06 12:30:25.150397] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:23:53.617 [2024-11-06 12:30:25.150468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218990 ] 00:23:53.617 [2024-11-06 12:30:25.216688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.875 [2024-11-06 12:30:25.256894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.875 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.875 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:23:53.875 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.KMK 00:23:54.133 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.390 [2024-11-06 12:30:25.870657] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.390 TLSTESTn1 00:23:54.390 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.649 Running I/O for 10 seconds... 
00:23:56.523 5736.00 IOPS, 22.41 MiB/s [2024-11-06T11:30:29.516Z] 5856.50 IOPS, 22.88 MiB/s [2024-11-06T11:30:30.086Z] 5905.33 IOPS, 23.07 MiB/s [2024-11-06T11:30:31.463Z] 5878.00 IOPS, 22.96 MiB/s [2024-11-06T11:30:32.400Z] 5902.20 IOPS, 23.06 MiB/s [2024-11-06T11:30:33.336Z] 5909.17 IOPS, 23.08 MiB/s [2024-11-06T11:30:34.272Z] 5892.29 IOPS, 23.02 MiB/s [2024-11-06T11:30:35.209Z] 5911.12 IOPS, 23.09 MiB/s [2024-11-06T11:30:36.145Z] 5921.33 IOPS, 23.13 MiB/s [2024-11-06T11:30:36.145Z] 5933.20 IOPS, 23.18 MiB/s 00:24:04.530 Latency(us) 00:24:04.530 [2024-11-06T11:30:36.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.530 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:04.530 Verification LBA range: start 0x0 length 0x2000 00:24:04.530 TLSTESTn1 : 10.02 5932.79 23.17 0.00 0.00 21537.45 6702.55 23116.33 00:24:04.530 [2024-11-06T11:30:36.145Z] =================================================================================================================== 00:24:04.530 [2024-11-06T11:30:36.145Z] Total : 5932.79 23.17 0.00 0.00 21537.45 6702.55 23116.33 00:24:04.530 { 00:24:04.530 "results": [ 00:24:04.530 { 00:24:04.530 "job": "TLSTESTn1", 00:24:04.530 "core_mask": "0x4", 00:24:04.530 "workload": "verify", 00:24:04.530 "status": "finished", 00:24:04.530 "verify_range": { 00:24:04.530 "start": 0, 00:24:04.530 "length": 8192 00:24:04.530 }, 00:24:04.530 "queue_depth": 128, 00:24:04.530 "io_size": 4096, 00:24:04.530 "runtime": 10.021925, 00:24:04.530 "iops": 5932.792352766559, 00:24:04.530 "mibps": 23.17497012799437, 00:24:04.530 "io_failed": 0, 00:24:04.530 "io_timeout": 0, 00:24:04.530 "avg_latency_us": 21537.45160397408, 00:24:04.530 "min_latency_us": 6702.545454545455, 00:24:04.530 "max_latency_us": 23116.334545454545 00:24:04.530 } 00:24:04.530 ], 00:24:04.530 "core_count": 1 00:24:04.530 } 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:04.530 
12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:04.530 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:04.530 nvmf_trace.0 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 218990 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 218990 ']' 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 218990 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 218990 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 218990' 00:24:04.789 killing process with pid 218990 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 218990 00:24:04.789 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.789 00:24:04.789 Latency(us) 00:24:04.789 [2024-11-06T11:30:36.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.789 [2024-11-06T11:30:36.404Z] =================================================================================================================== 00:24:04.789 [2024-11-06T11:30:36.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.789 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 218990 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.049 rmmod nvme_tcp 00:24:05.049 rmmod nvme_fabrics 00:24:05.049 rmmod nvme_keyring 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.049 12:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 218715 ']' 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 218715 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 218715 ']' 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 218715 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 218715 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 218715' 00:24:05.049 killing process with pid 218715 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 218715 00:24:05.049 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 218715 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.308 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.213 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.213 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.KMK 00:24:07.213 00:24:07.213 real 0m20.635s 00:24:07.213 user 0m21.847s 00:24:07.213 sys 0m9.783s 00:24:07.213 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.213 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.213 ************************************ 00:24:07.213 END TEST nvmf_fips 00:24:07.213 ************************************ 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:07.472 ************************************ 00:24:07.472 START TEST nvmf_control_msg_list 00:24:07.472 ************************************ 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:07.472 * Looking for test storage... 00:24:07.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:07.472 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:07.472 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.473 --rc genhtml_branch_coverage=1 00:24:07.473 --rc genhtml_function_coverage=1 00:24:07.473 --rc genhtml_legend=1 00:24:07.473 --rc geninfo_all_blocks=1 00:24:07.473 --rc geninfo_unexecuted_blocks=1 00:24:07.473 00:24:07.473 ' 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.473 --rc genhtml_branch_coverage=1 00:24:07.473 --rc genhtml_function_coverage=1 00:24:07.473 --rc genhtml_legend=1 00:24:07.473 --rc geninfo_all_blocks=1 00:24:07.473 --rc geninfo_unexecuted_blocks=1 00:24:07.473 00:24:07.473 ' 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.473 --rc genhtml_branch_coverage=1 00:24:07.473 --rc genhtml_function_coverage=1 00:24:07.473 --rc genhtml_legend=1 00:24:07.473 --rc geninfo_all_blocks=1 00:24:07.473 --rc geninfo_unexecuted_blocks=1 00:24:07.473 00:24:07.473 ' 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.473 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.473 --rc genhtml_branch_coverage=1 00:24:07.473 --rc genhtml_function_coverage=1 00:24:07.473 --rc genhtml_legend=1 00:24:07.473 --rc geninfo_all_blocks=1 00:24:07.473 --rc geninfo_unexecuted_blocks=1 00:24:07.473 00:24:07.473 ' 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.473 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.732 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:07.733 12:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.733 12:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.733 12:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:07.733 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.006 12:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:13.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:13.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.006 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.007 12:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:13.007 Found net devices under 0000:af:00.0: cvl_0_0 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.007 12:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:13.007 Found net devices under 0000:af:00.1: cvl_0_1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.007 12:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.007 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:24:13.267 00:24:13.267 --- 10.0.0.2 ping statistics --- 00:24:13.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.267 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:13.267 00:24:13.267 --- 10.0.0.1 ping statistics --- 00:24:13.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.267 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=224552 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 224552 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 224552 ']' 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.267 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.267 [2024-11-06 12:30:44.737258] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:24:13.267 [2024-11-06 12:30:44.737318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.267 [2024-11-06 12:30:44.839334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.526 [2024-11-06 12:30:44.887364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.526 [2024-11-06 12:30:44.887402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.526 [2024-11-06 12:30:44.887412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.526 [2024-11-06 12:30:44.887421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.526 [2024-11-06 12:30:44.887428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.526 [2024-11-06 12:30:44.888121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.526 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.526 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:24:13.526 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.526 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.526 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 [2024-11-06 12:30:45.029795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 Malloc0 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.526 [2024-11-06 12:30:45.067026] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=224585 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=224586 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=224587 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 224585 00:24:13.526 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.526 [2024-11-06 12:30:45.135424] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:13.785 [2024-11-06 12:30:45.145620] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:13.785 [2024-11-06 12:30:45.145804] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:14.720 Initializing NVMe Controllers 00:24:14.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:14.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:14.720 Initialization complete. Launching workers. 00:24:14.720 ======================================================== 00:24:14.720 Latency(us) 00:24:14.720 Device Information : IOPS MiB/s Average min max 00:24:14.720 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4395.98 17.17 227.13 183.10 41147.77 00:24:14.720 ======================================================== 00:24:14.720 Total : 4395.98 17.17 227.13 183.10 41147.77 00:24:14.720 00:24:14.720 Initializing NVMe Controllers 00:24:14.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:14.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:14.720 Initialization complete. Launching workers. 
00:24:14.720 ======================================================== 00:24:14.720 Latency(us) 00:24:14.720 Device Information : IOPS MiB/s Average min max 00:24:14.720 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 29.00 0.11 35292.71 183.23 41022.57 00:24:14.720 ======================================================== 00:24:14.720 Total : 29.00 0.11 35292.71 183.23 41022.57 00:24:14.720 00:24:14.720 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 224586 00:24:14.720 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 224587 00:24:14.979 Initializing NVMe Controllers 00:24:14.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:14.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:14.980 Initialization complete. Launching workers. 00:24:14.980 ======================================================== 00:24:14.980 Latency(us) 00:24:14.980 Device Information : IOPS MiB/s Average min max 00:24:14.980 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4372.00 17.08 228.38 181.47 41039.56 00:24:14.980 ======================================================== 00:24:14.980 Total : 4372.00 17.08 228.38 181.47 41039.56 00:24:14.980 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.980 12:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.980 rmmod nvme_tcp 00:24:14.980 rmmod nvme_fabrics 00:24:14.980 rmmod nvme_keyring 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 224552 ']' 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 224552 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 224552 ']' 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 224552 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 224552 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 224552' 00:24:14.980 killing process with pid 224552 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 224552 00:24:14.980 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 224552 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.239 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.776 00:24:17.776 real 0m9.926s 00:24:17.776 user 0m6.716s 00:24:17.776 sys 
0m5.296s 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 ************************************ 00:24:17.776 END TEST nvmf_control_msg_list 00:24:17.776 ************************************ 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 ************************************ 00:24:17.776 START TEST nvmf_wait_for_buf 00:24:17.776 ************************************ 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:17.776 * Looking for test storage... 
00:24:17.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:17.776 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:24:17.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.776 --rc genhtml_branch_coverage=1 00:24:17.776 --rc genhtml_function_coverage=1 00:24:17.776 --rc genhtml_legend=1 00:24:17.776 --rc geninfo_all_blocks=1 00:24:17.776 --rc geninfo_unexecuted_blocks=1 00:24:17.776 00:24:17.776 ' 00:24:17.776 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:17.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.777 --rc genhtml_branch_coverage=1 00:24:17.777 --rc genhtml_function_coverage=1 00:24:17.777 --rc genhtml_legend=1 00:24:17.777 --rc geninfo_all_blocks=1 00:24:17.777 --rc geninfo_unexecuted_blocks=1 00:24:17.777 00:24:17.777 ' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:17.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.777 --rc genhtml_branch_coverage=1 00:24:17.777 --rc genhtml_function_coverage=1 00:24:17.777 --rc genhtml_legend=1 00:24:17.777 --rc geninfo_all_blocks=1 00:24:17.777 --rc geninfo_unexecuted_blocks=1 00:24:17.777 00:24:17.777 ' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:17.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.777 --rc genhtml_branch_coverage=1 00:24:17.777 --rc genhtml_function_coverage=1 00:24:17.777 --rc genhtml_legend=1 00:24:17.777 --rc geninfo_all_blocks=1 00:24:17.777 --rc geninfo_unexecuted_blocks=1 00:24:17.777 00:24:17.777 ' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.048 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:23.049 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:23.049 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:23.049 Found net devices under 0000:af:00.0: cvl_0_0 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.049 12:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:23.049 Found net devices under 0000:af:00.1: cvl_0_1 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.049 12:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.049 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.049 12:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:23.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:24:23.049 00:24:23.049 --- 10.0.0.2 ping statistics --- 00:24:23.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.049 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:24:23.049 00:24:23.049 --- 10.0.0.1 ping statistics --- 00:24:23.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.049 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=228328 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 228328 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 228328 ']' 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.049 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:23.050 [2024-11-06 12:30:54.284089] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:24:23.050 [2024-11-06 12:30:54.284148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.050 [2024-11-06 12:30:54.384217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.050 [2024-11-06 12:30:54.432064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.050 [2024-11-06 12:30:54.432105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:23.050 [2024-11-06 12:30:54.432116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.050 [2024-11-06 12:30:54.432125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.050 [2024-11-06 12:30:54.432133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.050 [2024-11-06 12:30:54.432840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 
12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 Malloc0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.050 [2024-11-06 12:30:54.631050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 [2024-11-06 12:30:54.655222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:23.050 12:30:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.309 [2024-11-06 12:30:54.743564] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:24.686 Initializing NVMe Controllers 00:24:24.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:24.686 Initialization complete. Launching workers. 00:24:24.686 ======================================================== 00:24:24.686 Latency(us) 00:24:24.686 Device Information : IOPS MiB/s Average min max 00:24:24.686 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.59 7270.98 63847.06 00:24:24.686 ======================================================== 00:24:24.686 Total : 129.00 16.12 32238.59 7270.98 63847.06 00:24:24.686 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.686 12:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.686 rmmod nvme_tcp 00:24:24.686 rmmod nvme_fabrics 00:24:24.686 rmmod nvme_keyring 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 228328 ']' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 228328 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 228328 ']' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 228328 
00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 228328 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 228328' 00:24:24.686 killing process with pid 228328 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 228328 00:24:24.686 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 228328 00:24:24.945 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.946 12:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.946 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.480 00:24:27.480 real 0m9.617s 00:24:27.480 user 0m3.528s 00:24:27.480 sys 0m4.272s 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.480 ************************************ 00:24:27.480 END TEST nvmf_wait_for_buf 00:24:27.480 ************************************ 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.480 12:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.758 
12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:32.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.758 12:31:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:32.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:32.758 Found net devices under 0000:af:00.0: cvl_0_0 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:32.758 Found net devices under 0000:af:00.1: cvl_0_1 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.758 ************************************ 00:24:32.758 START TEST nvmf_perf_adq 00:24:32.758 ************************************ 00:24:32.758 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:32.758 * Looking for test storage... 00:24:32.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.758 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.759 --rc genhtml_branch_coverage=1 00:24:32.759 --rc genhtml_function_coverage=1 00:24:32.759 --rc genhtml_legend=1 00:24:32.759 --rc geninfo_all_blocks=1 00:24:32.759 --rc geninfo_unexecuted_blocks=1 00:24:32.759 00:24:32.759 ' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.759 --rc genhtml_branch_coverage=1 00:24:32.759 --rc genhtml_function_coverage=1 00:24:32.759 --rc genhtml_legend=1 00:24:32.759 --rc geninfo_all_blocks=1 00:24:32.759 --rc geninfo_unexecuted_blocks=1 00:24:32.759 00:24:32.759 ' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.759 --rc genhtml_branch_coverage=1 00:24:32.759 --rc genhtml_function_coverage=1 00:24:32.759 --rc genhtml_legend=1 00:24:32.759 --rc geninfo_all_blocks=1 00:24:32.759 --rc geninfo_unexecuted_blocks=1 00:24:32.759 00:24:32.759 ' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.759 --rc genhtml_branch_coverage=1 00:24:32.759 --rc genhtml_function_coverage=1 00:24:32.759 --rc genhtml_legend=1 00:24:32.759 --rc geninfo_all_blocks=1 00:24:32.759 --rc geninfo_unexecuted_blocks=1 00:24:32.759 00:24:32.759 ' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.759 12:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:32.759 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.760 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.032 12:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.032 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:38.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:38.033 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:38.033 Found net devices under 0000:af:00.0: cvl_0_0 00:24:38.033 12:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:38.033 Found net devices under 0000:af:00.1: cvl_0_1 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:24:38.033 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:39.417 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:41.475 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:46.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:46.779 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:46.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
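The `[[ 0x159b == \0\x\1\0\1\7 ]]`-style checks above are nvmf/common.sh sorting discovered NICs into the e810/x722/mlx groups built earlier in the trace. A hedged sketch of that classification — `classify_nic` is a hypothetical name, and the ID table is abbreviated to the device IDs that appear in this log:

```shell
# Illustrative classification of NICs by PCI device ID, mirroring the
# e810/x722/mlx grouping seen in the trace (IDs taken from this log only;
# classify_nic is a hypothetical helper, not part of the test suite).
classify_nic() {
    case $1 in
        0x1592|0x159b)                      echo e810 ;;     # Intel E810
        0x37d2)                             echo x722 ;;     # Intel X722
        0x101[3579bd]|0x1021|0xa2d6|0xa2dc) echo mlx  ;;     # Mellanox
        *)                                  echo unknown ;;
    esac
}
```

For the 0x159b functions found in this run, both land in the e810 bucket, which is why `pci_devs` is later reset to `("${e810[@]}")`.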
00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:46.779 Found net devices under 0000:af:00.0: cvl_0_0 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:46.779 Found net devices under 0000:af:00.1: cvl_0_1 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.779 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.780 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:46.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:24:46.780 00:24:46.780 --- 10.0.0.2 ping statistics --- 00:24:46.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.780 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:46.780 00:24:46.780 --- 10.0.0.1 ping statistics --- 00:24:46.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.780 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=237232 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 237232 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 237232 ']' 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.780 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.780 [2024-11-06 12:31:18.381100] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:24:46.780 [2024-11-06 12:31:18.381160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.038 [2024-11-06 12:31:18.481251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.038 [2024-11-06 12:31:18.532485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.038 [2024-11-06 12:31:18.532529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.038 [2024-11-06 12:31:18.532539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.038 [2024-11-06 12:31:18.532548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.038 [2024-11-06 12:31:18.532556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
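`waitforlisten` above blocks until the target is "listening on UNIX domain socket /var/tmp/spdk.sock". Conceptually that is a poll loop on the RPC socket path; a simplified sketch (function name, retry count, and interval are assumptions — the real helper also verifies the pid stays alive and talks to the socket rather than just testing for it):

```shell
# Simplified sketch of the waitforlisten idea: poll until a UNIX-domain
# socket appears at the RPC path. Retry count and sleep interval are
# assumptions; the real helper also checks that the pid is still running.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Note that `-S` rejects anything that is not a socket, so a stale regular file at the path would not satisfy the wait.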
00:24:47.038 [2024-11-06 12:31:18.534643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.038 [2024-11-06 12:31:18.534748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.038 [2024-11-06 12:31:18.534849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.038 [2024-11-06 12:31:18.534854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.038 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:47.297 12:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 [2024-11-06 12:31:18.791381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 Malloc1 00:24:47.297 12:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 [2024-11-06 12:31:18.855362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=237291 00:24:47.297 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:47.297 12:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:49.827 "tick_rate": 2200000000, 00:24:49.827 "poll_groups": [ 00:24:49.827 { 00:24:49.827 "name": "nvmf_tgt_poll_group_000", 00:24:49.827 "admin_qpairs": 1, 00:24:49.827 "io_qpairs": 1, 00:24:49.827 "current_admin_qpairs": 1, 00:24:49.827 "current_io_qpairs": 1, 00:24:49.827 "pending_bdev_io": 0, 00:24:49.827 "completed_nvme_io": 21189, 00:24:49.827 "transports": [ 00:24:49.827 { 00:24:49.827 "trtype": "TCP" 00:24:49.827 } 00:24:49.827 ] 00:24:49.827 }, 00:24:49.827 { 00:24:49.827 "name": "nvmf_tgt_poll_group_001", 00:24:49.827 "admin_qpairs": 0, 00:24:49.827 "io_qpairs": 1, 00:24:49.827 "current_admin_qpairs": 0, 00:24:49.827 "current_io_qpairs": 1, 00:24:49.827 "pending_bdev_io": 0, 00:24:49.827 "completed_nvme_io": 20610, 00:24:49.827 "transports": [ 00:24:49.827 { 00:24:49.827 "trtype": "TCP" 00:24:49.827 } 00:24:49.827 ] 00:24:49.827 }, 00:24:49.827 { 00:24:49.827 "name": "nvmf_tgt_poll_group_002", 00:24:49.827 "admin_qpairs": 0, 00:24:49.827 "io_qpairs": 1, 00:24:49.827 "current_admin_qpairs": 0, 00:24:49.827 "current_io_qpairs": 1, 00:24:49.827 "pending_bdev_io": 0, 00:24:49.827 "completed_nvme_io": 21237, 00:24:49.827 
"transports": [ 00:24:49.827 { 00:24:49.827 "trtype": "TCP" 00:24:49.827 } 00:24:49.827 ] 00:24:49.827 }, 00:24:49.827 { 00:24:49.827 "name": "nvmf_tgt_poll_group_003", 00:24:49.827 "admin_qpairs": 0, 00:24:49.827 "io_qpairs": 1, 00:24:49.827 "current_admin_qpairs": 0, 00:24:49.827 "current_io_qpairs": 1, 00:24:49.827 "pending_bdev_io": 0, 00:24:49.827 "completed_nvme_io": 15917, 00:24:49.827 "transports": [ 00:24:49.827 { 00:24:49.827 "trtype": "TCP" 00:24:49.827 } 00:24:49.827 ] 00:24:49.827 } 00:24:49.827 ] 00:24:49.827 }' 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:49.827 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 237291 00:24:57.936 Initializing NVMe Controllers 00:24:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:57.936 Initialization complete. Launching workers. 
00:24:57.936 ======================================================== 00:24:57.936 Latency(us) 00:24:57.936 Device Information : IOPS MiB/s Average min max 00:24:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8334.90 32.56 7678.41 2035.39 12802.49 00:24:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10941.30 42.74 5849.08 1791.60 9695.78 00:24:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11131.90 43.48 5750.22 1705.95 9306.99 00:24:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11163.10 43.61 5733.92 1875.56 9417.78 00:24:57.936 ======================================================== 00:24:57.936 Total : 41571.20 162.39 6158.46 1705.95 12802.49 00:24:57.936 00:24:57.936 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:57.936 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.936 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.936 rmmod nvme_tcp 00:24:57.936 rmmod nvme_fabrics 00:24:57.936 rmmod nvme_keyring 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:57.936 12:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 237232 ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 237232 ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 237232' 00:24:57.936 killing process with pid 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 237232 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:57.936 12:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.936 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.838 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.838 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:59.838 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:59.838 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:01.213 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:03.117 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:08.391 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:08.391 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.391 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.392 12:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.392 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.392 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.392 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.392 12:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.392 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.392 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:25:08.393 00:25:08.393 --- 10.0.0.2 ping statistics --- 00:25:08.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.393 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:25:08.393 00:25:08.393 --- 10.0.0.1 ping statistics --- 00:25:08.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.393 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:08.393 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:08.393 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:08.651 net.core.busy_poll = 1 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:08.651 net.core.busy_read = 1 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=241348 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 241348 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 241348 ']' 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.651 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:08.910 [2024-11-06 12:31:40.284289] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:25:08.910 [2024-11-06 12:31:40.284347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.910 [2024-11-06 12:31:40.385383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.910 [2024-11-06 12:31:40.435638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.910 [2024-11-06 12:31:40.435673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.910 [2024-11-06 12:31:40.435684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.910 [2024-11-06 12:31:40.435693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:08.910 [2024-11-06 12:31:40.435701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.910 [2024-11-06 12:31:40.437486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.910 [2024-11-06 12:31:40.437527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.910 [2024-11-06 12:31:40.437522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.910 [2024-11-06 12:31:40.437506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 [2024-11-06 12:31:40.744745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.169 Malloc1 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.169 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.428 [2024-11-06 12:31:40.803093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=241610 
00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:09.428 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:11.332 "tick_rate": 2200000000, 00:25:11.332 "poll_groups": [ 00:25:11.332 { 00:25:11.332 "name": "nvmf_tgt_poll_group_000", 00:25:11.332 "admin_qpairs": 1, 00:25:11.332 "io_qpairs": 1, 00:25:11.332 "current_admin_qpairs": 1, 00:25:11.332 "current_io_qpairs": 1, 00:25:11.332 "pending_bdev_io": 0, 00:25:11.332 "completed_nvme_io": 31311, 00:25:11.332 "transports": [ 00:25:11.332 { 00:25:11.332 "trtype": "TCP" 00:25:11.332 } 00:25:11.332 ] 00:25:11.332 }, 00:25:11.332 { 00:25:11.332 "name": "nvmf_tgt_poll_group_001", 00:25:11.332 "admin_qpairs": 0, 00:25:11.332 "io_qpairs": 3, 00:25:11.332 "current_admin_qpairs": 0, 00:25:11.332 "current_io_qpairs": 3, 00:25:11.332 "pending_bdev_io": 0, 00:25:11.332 "completed_nvme_io": 32540, 00:25:11.332 "transports": [ 00:25:11.332 { 00:25:11.332 "trtype": "TCP" 00:25:11.332 } 00:25:11.332 ] 00:25:11.332 }, 00:25:11.332 { 00:25:11.332 "name": "nvmf_tgt_poll_group_002", 00:25:11.332 "admin_qpairs": 0, 00:25:11.332 "io_qpairs": 0, 00:25:11.332 "current_admin_qpairs": 0, 
00:25:11.332 "current_io_qpairs": 0, 00:25:11.332 "pending_bdev_io": 0, 00:25:11.332 "completed_nvme_io": 0, 00:25:11.332 "transports": [ 00:25:11.332 { 00:25:11.332 "trtype": "TCP" 00:25:11.332 } 00:25:11.332 ] 00:25:11.332 }, 00:25:11.332 { 00:25:11.332 "name": "nvmf_tgt_poll_group_003", 00:25:11.332 "admin_qpairs": 0, 00:25:11.332 "io_qpairs": 0, 00:25:11.332 "current_admin_qpairs": 0, 00:25:11.332 "current_io_qpairs": 0, 00:25:11.332 "pending_bdev_io": 0, 00:25:11.332 "completed_nvme_io": 0, 00:25:11.332 "transports": [ 00:25:11.332 { 00:25:11.332 "trtype": "TCP" 00:25:11.332 } 00:25:11.332 ] 00:25:11.332 } 00:25:11.332 ] 00:25:11.332 }' 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:11.332 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 241610 00:25:19.453 Initializing NVMe Controllers 00:25:19.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:19.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:19.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:19.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:19.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:19.453 Initialization complete. Launching workers. 
00:25:19.453 ======================================================== 00:25:19.453 Latency(us) 00:25:19.453 Device Information : IOPS MiB/s Average min max 00:25:19.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6143.80 24.00 10444.65 1635.55 58065.71 00:25:19.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 16218.20 63.35 3945.81 1208.32 6796.04 00:25:19.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4820.10 18.83 13279.74 1362.60 59384.43 00:25:19.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5766.90 22.53 11099.23 1434.58 57634.10 00:25:19.453 ======================================================== 00:25:19.453 Total : 32949.00 128.71 7775.10 1208.32 59384.43 00:25:19.453 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.453 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.453 rmmod nvme_tcp 00:25:19.453 rmmod nvme_fabrics 00:25:19.453 rmmod nvme_keyring 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:19.453 12:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 241348 ']' 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 241348 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 241348 ']' 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 241348 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:19.453 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 241348 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 241348' 00:25:19.712 killing process with pid 241348 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 241348 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 241348 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:19.712 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:19.712 12:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.713 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.250 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.250 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:22.250 00:25:22.250 real 0m49.440s 00:25:22.250 user 2m44.294s 00:25:22.250 sys 0m10.335s 00:25:22.250 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:22.250 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:22.250 ************************************ 00:25:22.250 END TEST nvmf_perf_adq 00:25:22.250 ************************************ 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.251 ************************************ 00:25:22.251 START TEST nvmf_shutdown 00:25:22.251 ************************************ 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:22.251 * Looking for test storage... 00:25:22.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.251 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:22.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.251 --rc genhtml_branch_coverage=1 00:25:22.251 --rc genhtml_function_coverage=1 00:25:22.251 --rc genhtml_legend=1 00:25:22.251 --rc geninfo_all_blocks=1 00:25:22.251 --rc geninfo_unexecuted_blocks=1 00:25:22.251 00:25:22.251 ' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:22.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.251 --rc genhtml_branch_coverage=1 00:25:22.251 --rc genhtml_function_coverage=1 00:25:22.251 --rc genhtml_legend=1 00:25:22.251 --rc geninfo_all_blocks=1 00:25:22.251 --rc geninfo_unexecuted_blocks=1 00:25:22.251 00:25:22.251 ' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:22.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.251 --rc genhtml_branch_coverage=1 00:25:22.251 --rc genhtml_function_coverage=1 00:25:22.251 --rc genhtml_legend=1 00:25:22.251 --rc geninfo_all_blocks=1 00:25:22.251 --rc geninfo_unexecuted_blocks=1 00:25:22.251 00:25:22.251 ' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:22.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.251 --rc genhtml_branch_coverage=1 00:25:22.251 --rc genhtml_function_coverage=1 00:25:22.251 --rc genhtml_legend=1 00:25:22.251 --rc geninfo_all_blocks=1 00:25:22.251 --rc geninfo_unexecuted_blocks=1 00:25:22.251 00:25:22.251 ' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.251 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:22.252 ************************************ 00:25:22.252 START TEST nvmf_shutdown_tc1 00:25:22.252 ************************************ 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.252 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:27.522 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.522 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.522 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.522 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.522 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.522 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.522 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.522 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.523 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.523 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:25:27.523 00:25:27.523 --- 10.0.0.2 ping statistics --- 00:25:27.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.523 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:25:27.523 00:25:27.523 --- 10.0.0.1 ping statistics --- 00:25:27.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.523 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=247024 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 247024 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 247024 ']' 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:27.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.523 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:27.783 [2024-11-06 12:31:59.149526] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:25:27.783 [2024-11-06 12:31:59.149590] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.783 [2024-11-06 12:31:59.221978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.783 [2024-11-06 12:31:59.259798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.783 [2024-11-06 12:31:59.259836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.783 [2024-11-06 12:31:59.259842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.783 [2024-11-06 12:31:59.259847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.783 [2024-11-06 12:31:59.259852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:27.783 [2024-11-06 12:31:59.261468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.783 [2024-11-06 12:31:59.261540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.783 [2024-11-06 12:31:59.261637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:27.783 [2024-11-06 12:31:59.261638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.783 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:27.783 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:25:27.783 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.783 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.783 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.043 [2024-11-06 12:31:59.423654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.043 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.043 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.043 Malloc1 00:25:28.043 [2024-11-06 12:31:59.540081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.043 Malloc2 00:25:28.043 Malloc3 00:25:28.043 Malloc4 00:25:28.302 Malloc5 00:25:28.302 Malloc6 00:25:28.302 Malloc7 00:25:28.302 Malloc8 00:25:28.302 Malloc9 
00:25:28.302 Malloc10 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=247293 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 247293 /var/tmp/bdevperf.sock 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 247293 ']' 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.561 { 00:25:28.561 "params": { 00:25:28.561 "name": "Nvme$subsystem", 00:25:28.561 "trtype": "$TEST_TRANSPORT", 00:25:28.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.561 "adrfam": "ipv4", 00:25:28.561 "trsvcid": "$NVMF_PORT", 00:25:28.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.561 "hdgst": ${hdgst:-false}, 00:25:28.561 "ddgst": ${ddgst:-false} 00:25:28.561 }, 00:25:28.561 "method": "bdev_nvme_attach_controller" 00:25:28.561 } 00:25:28.561 EOF 00:25:28.561 )") 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.561 { 00:25:28.561 "params": { 00:25:28.561 "name": "Nvme$subsystem", 00:25:28.561 "trtype": "$TEST_TRANSPORT", 00:25:28.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.561 "adrfam": "ipv4", 00:25:28.561 "trsvcid": "$NVMF_PORT", 00:25:28.561 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.561 "hdgst": ${hdgst:-false}, 00:25:28.561 "ddgst": ${ddgst:-false} 00:25:28.561 }, 00:25:28.561 "method": "bdev_nvme_attach_controller" 00:25:28.561 } 00:25:28.561 EOF 00:25:28.561 )") 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.561 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.561 { 00:25:28.561 "params": { 00:25:28.561 "name": "Nvme$subsystem", 00:25:28.561 "trtype": "$TEST_TRANSPORT", 00:25:28.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.561 "adrfam": "ipv4", 00:25:28.561 "trsvcid": "$NVMF_PORT", 00:25:28.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.561 "hdgst": ${hdgst:-false}, 00:25:28.561 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": 
${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 
00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 [2024-11-06 12:32:00.019174] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:25:28.562 [2024-11-06 12:32:00.019237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 
00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:28.562 { 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme$subsystem", 00:25:28.562 "trtype": "$TEST_TRANSPORT", 00:25:28.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "$NVMF_PORT", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.562 "hdgst": ${hdgst:-false}, 00:25:28.562 "ddgst": ${ddgst:-false} 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 } 00:25:28.562 EOF 00:25:28.562 )") 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:28.562 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme1", 00:25:28.562 "trtype": "tcp", 00:25:28.562 "traddr": "10.0.0.2", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "4420", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.562 "hdgst": false, 00:25:28.562 "ddgst": false 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 },{ 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme2", 00:25:28.562 "trtype": "tcp", 00:25:28.562 "traddr": "10.0.0.2", 00:25:28.562 "adrfam": "ipv4", 00:25:28.562 "trsvcid": "4420", 00:25:28.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:28.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:28.562 "hdgst": false, 00:25:28.562 "ddgst": false 00:25:28.562 }, 00:25:28.562 "method": "bdev_nvme_attach_controller" 00:25:28.562 },{ 00:25:28.562 "params": { 00:25:28.562 "name": "Nvme3", 00:25:28.562 "trtype": "tcp", 00:25:28.562 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme4", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 
00:25:28.563 "name": "Nvme5", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme6", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme7", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme8", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme9", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 },{ 00:25:28.563 "params": { 00:25:28.563 "name": "Nvme10", 00:25:28.563 "trtype": "tcp", 00:25:28.563 "traddr": "10.0.0.2", 00:25:28.563 "adrfam": "ipv4", 00:25:28.563 "trsvcid": "4420", 00:25:28.563 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:28.563 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:28.563 "hdgst": false, 00:25:28.563 "ddgst": false 00:25:28.563 }, 00:25:28.563 "method": "bdev_nvme_attach_controller" 00:25:28.563 }' 00:25:28.563 [2024-11-06 12:32:00.116098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.563 [2024-11-06 12:32:00.164736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 247293 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:30.470 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:31.644 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 247293 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 247024 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.644 { 00:25:31.644 "params": { 00:25:31.644 "name": "Nvme$subsystem", 00:25:31.644 "trtype": "$TEST_TRANSPORT", 00:25:31.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.644 "adrfam": "ipv4", 00:25:31.644 "trsvcid": "$NVMF_PORT", 00:25:31.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.644 "hdgst": ${hdgst:-false}, 00:25:31.644 "ddgst": ${ddgst:-false} 00:25:31.644 }, 00:25:31.644 "method": "bdev_nvme_attach_controller" 00:25:31.644 } 00:25:31.644 EOF 00:25:31.644 )") 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.644 12:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.644 { 00:25:31.644 "params": { 00:25:31.644 "name": "Nvme$subsystem", 00:25:31.644 "trtype": "$TEST_TRANSPORT", 00:25:31.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.644 "adrfam": "ipv4", 00:25:31.644 "trsvcid": "$NVMF_PORT", 00:25:31.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.644 "hdgst": ${hdgst:-false}, 00:25:31.644 "ddgst": ${ddgst:-false} 00:25:31.644 }, 00:25:31.644 "method": "bdev_nvme_attach_controller" 00:25:31.644 } 00:25:31.644 EOF 00:25:31.644 )") 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.644 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.644 { 00:25:31.644 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 
12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 [2024-11-06 12:32:03.101946] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:25:31.645 [2024-11-06 12:32:03.101994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247884 ] 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": 
${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:31.645 { 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme$subsystem", 00:25:31.645 "trtype": "$TEST_TRANSPORT", 00:25:31.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "$NVMF_PORT", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.645 "hdgst": ${hdgst:-false}, 00:25:31.645 "ddgst": ${ddgst:-false} 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 } 00:25:31.645 EOF 00:25:31.645 )") 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:31.645 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme1", 00:25:31.645 "trtype": "tcp", 00:25:31.645 "traddr": "10.0.0.2", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "4420", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.645 "hdgst": false, 00:25:31.645 "ddgst": false 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 },{ 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme2", 00:25:31.645 "trtype": "tcp", 00:25:31.645 "traddr": "10.0.0.2", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "4420", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:31.645 "hdgst": false, 00:25:31.645 "ddgst": false 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 },{ 00:25:31.645 "params": { 00:25:31.645 "name": "Nvme3", 00:25:31.645 "trtype": "tcp", 00:25:31.645 "traddr": "10.0.0.2", 00:25:31.645 "adrfam": "ipv4", 00:25:31.645 "trsvcid": "4420", 00:25:31.645 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:31.645 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:31.645 "hdgst": false, 00:25:31.645 "ddgst": false 00:25:31.645 }, 00:25:31.645 "method": "bdev_nvme_attach_controller" 00:25:31.645 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme4", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 
00:25:31.646 "name": "Nvme5", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme6", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme7", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme8", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme9", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 },{ 00:25:31.646 "params": { 00:25:31.646 "name": "Nvme10", 00:25:31.646 "trtype": "tcp", 00:25:31.646 "traddr": "10.0.0.2", 00:25:31.646 "adrfam": "ipv4", 00:25:31.646 "trsvcid": "4420", 00:25:31.646 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:31.646 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:31.646 "hdgst": false, 00:25:31.646 "ddgst": false 00:25:31.646 }, 00:25:31.646 "method": "bdev_nvme_attach_controller" 00:25:31.646 }' 00:25:31.646 [2024-11-06 12:32:03.189912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.646 [2024-11-06 12:32:03.238598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.023 Running I/O for 1 seconds... 00:25:34.401 1367.00 IOPS, 85.44 MiB/s 00:25:34.401 Latency(us) 00:25:34.401 [2024-11-06T11:32:06.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.401 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme1n1 : 1.18 162.87 10.18 0.00 0.00 387419.54 17992.61 320292.31 00:25:34.401 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme2n1 : 1.19 161.35 10.08 0.00 0.00 384186.18 16443.58 324105.31 00:25:34.401 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme3n1 : 1.24 205.97 12.87 0.00 0.00 295392.58 19899.11 346983.33 00:25:34.401 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme4n1 : 1.24 209.90 13.12 0.00 0.00 282908.26 5898.24 310759.80 00:25:34.401 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme5n1 : 1.25 205.31 12.83 0.00 0.00 282357.53 17754.30 329824.81 00:25:34.401 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme6n1 : 1.25 213.06 13.32 0.00 0.00 266397.46 8460.10 285975.27 00:25:34.401 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme7n1 : 1.25 204.16 12.76 0.00 0.00 274493.44 17873.45 335544.32 00:25:34.401 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme8n1 : 1.26 203.55 12.72 0.00 0.00 268377.83 10485.76 308853.29 00:25:34.401 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme9n1 : 1.20 160.07 10.00 0.00 0.00 332204.84 21924.77 308853.29 00:25:34.401 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.401 Verification LBA range: start 0x0 length 0x400 00:25:34.401 Nvme10n1 : 1.26 202.64 12.66 0.00 0.00 259048.96 11915.64 331731.32 00:25:34.401 [2024-11-06T11:32:06.016Z] =================================================================================================================== 00:25:34.401 [2024-11-06T11:32:06.016Z] Total : 1928.89 120.56 0.00 0.00 297877.88 5898.24 346983.33 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.661 rmmod nvme_tcp 00:25:34.661 rmmod nvme_fabrics 00:25:34.661 rmmod nvme_keyring 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 247024 ']' 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 247024 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 247024 ']' 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # kill -0 247024 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 247024 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 247024' 00:25:34.661 killing process with pid 247024 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 247024 00:25:34.661 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 247024 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.230 12:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.230 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:37.135 00:25:37.135 real 0m15.008s 00:25:37.135 user 0m35.314s 00:25:37.135 sys 0m5.466s 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:37.135 ************************************ 00:25:37.135 END TEST nvmf_shutdown_tc1 00:25:37.135 ************************************ 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:37.135 ************************************ 00:25:37.135 
START TEST nvmf_shutdown_tc2 00:25:37.135 ************************************ 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.135 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.135 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.135 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:37.135 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.135 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:37.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:37.136 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.136 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:37.136 Found net devices under 0000:af:00.0: cvl_0_0 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:37.136 Found net devices under 0000:af:00.1: cvl_0_1 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.136 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.395 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.395 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.395 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:25:37.395 00:25:37.395 --- 10.0.0.2 ping statistics --- 00:25:37.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.395 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:25:37.395 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:37.654 00:25:37.654 --- 10.0.0.1 ping statistics --- 00:25:37.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.654 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.654 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.655 12:32:09 
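The trace up to this point shows the harness building a point-to-point TCP test topology: one port (cvl_0_0) is moved into a private network namespace and addressed as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables ACCEPT rule opens port 4420, and the two pings confirm reachability in both directions before nvmf_tgt is started. A minimal sketch of the same topology using a veth pair instead of the physical cvl ports (interface and namespace names here are placeholders, and every command requires root):

```shell
# Private namespace for the NVMe-oF target side of the link.
ip netns add tgt_ns                  # hypothetical namespace name
# The log uses two physical ports; a veth pair behaves the same locally.
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns
# Initiator address in the root namespace, target address inside the ns.
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
# Same bidirectional reachability check the harness performs above.
ping -c 1 10.0.0.2 && ip netns exec tgt_ns ping -c 1 10.0.0.1
```

With this layout the target process can later be launched under `ip netns exec tgt_ns ...`, exactly as the `NVMF_TARGET_NS_CMD` prefix in the trace does for nvmf_tgt.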
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=249034 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 249034 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 249034 ']' 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:37.655 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.655 [2024-11-06 12:32:09.125271] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:25:37.655 [2024-11-06 12:32:09.125334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.655 [2024-11-06 12:32:09.199257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.655 [2024-11-06 12:32:09.240162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.655 [2024-11-06 12:32:09.240194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.655 [2024-11-06 12:32:09.240201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.655 [2024-11-06 12:32:09.240207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.655 [2024-11-06 12:32:09.240211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.655 [2024-11-06 12:32:09.241622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.655 [2024-11-06 12:32:09.241741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.655 [2024-11-06 12:32:09.241841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.655 [2024-11-06 12:32:09.241843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.914 [2024-11-06 12:32:09.387830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.914 12:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.914 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.915 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.915 Malloc1 00:25:37.915 [2024-11-06 12:32:09.492806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.915 Malloc2 00:25:38.174 Malloc3 00:25:38.174 Malloc4 00:25:38.174 Malloc5 00:25:38.174 Malloc6 00:25:38.174 Malloc7 00:25:38.174 Malloc8 00:25:38.436 Malloc9 
00:25:38.436 Malloc10 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=249106 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 249106 /var/tmp/bdevperf.sock 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 249106 ']' 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:38.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.436 { 00:25:38.436 "params": { 00:25:38.436 "name": "Nvme$subsystem", 00:25:38.436 "trtype": "$TEST_TRANSPORT", 00:25:38.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.436 "adrfam": "ipv4", 00:25:38.436 "trsvcid": "$NVMF_PORT", 00:25:38.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.436 "hdgst": ${hdgst:-false}, 00:25:38.436 "ddgst": ${ddgst:-false} 00:25:38.436 }, 00:25:38.436 "method": "bdev_nvme_attach_controller" 00:25:38.436 } 00:25:38.436 EOF 00:25:38.436 )") 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.436 { 00:25:38.436 "params": { 00:25:38.436 "name": "Nvme$subsystem", 00:25:38.436 "trtype": "$TEST_TRANSPORT", 00:25:38.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.436 
"adrfam": "ipv4", 00:25:38.436 "trsvcid": "$NVMF_PORT", 00:25:38.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.436 "hdgst": ${hdgst:-false}, 00:25:38.436 "ddgst": ${ddgst:-false} 00:25:38.436 }, 00:25:38.436 "method": "bdev_nvme_attach_controller" 00:25:38.436 } 00:25:38.436 EOF 00:25:38.436 )") 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.436 { 00:25:38.436 "params": { 00:25:38.436 "name": "Nvme$subsystem", 00:25:38.436 "trtype": "$TEST_TRANSPORT", 00:25:38.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.436 "adrfam": "ipv4", 00:25:38.436 "trsvcid": "$NVMF_PORT", 00:25:38.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.436 "hdgst": ${hdgst:-false}, 00:25:38.436 "ddgst": ${ddgst:-false} 00:25:38.436 }, 00:25:38.436 "method": "bdev_nvme_attach_controller" 00:25:38.436 } 00:25:38.436 EOF 00:25:38.436 )") 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.436 { 00:25:38.436 "params": { 00:25:38.436 "name": "Nvme$subsystem", 00:25:38.436 "trtype": "$TEST_TRANSPORT", 00:25:38.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.436 "adrfam": "ipv4", 00:25:38.436 "trsvcid": "$NVMF_PORT", 00:25:38.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:38.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.436 "hdgst": ${hdgst:-false}, 00:25:38.436 "ddgst": ${ddgst:-false} 00:25:38.436 }, 00:25:38.436 "method": "bdev_nvme_attach_controller" 00:25:38.436 } 00:25:38.436 EOF 00:25:38.436 )") 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.436 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.436 { 00:25:38.436 "params": { 00:25:38.436 "name": "Nvme$subsystem", 00:25:38.436 "trtype": "$TEST_TRANSPORT", 00:25:38.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.436 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": ${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.437 { 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme$subsystem", 00:25:38.437 "trtype": "$TEST_TRANSPORT", 00:25:38.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": 
${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.437 { 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme$subsystem", 00:25:38.437 "trtype": "$TEST_TRANSPORT", 00:25:38.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": ${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 [2024-11-06 12:32:09.971857] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:25:38.437 [2024-11-06 12:32:09.971921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249106 ] 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.437 { 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme$subsystem", 00:25:38.437 "trtype": "$TEST_TRANSPORT", 00:25:38.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": ${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.437 { 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme$subsystem", 00:25:38.437 "trtype": "$TEST_TRANSPORT", 00:25:38.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": ${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": 
"bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:38.437 { 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme$subsystem", 00:25:38.437 "trtype": "$TEST_TRANSPORT", 00:25:38.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "$NVMF_PORT", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.437 "hdgst": ${hdgst:-false}, 00:25:38.437 "ddgst": ${ddgst:-false} 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 } 00:25:38.437 EOF 00:25:38.437 )") 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:38.437 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme1", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme2", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme3", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme4", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 
00:25:38.437 "name": "Nvme5", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme6", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme7", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme8", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.437 },{ 00:25:38.437 "params": { 00:25:38.437 "name": "Nvme9", 00:25:38.437 "trtype": "tcp", 00:25:38.437 "traddr": "10.0.0.2", 00:25:38.437 "adrfam": "ipv4", 00:25:38.437 "trsvcid": "4420", 00:25:38.437 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:38.437 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:38.437 "hdgst": false, 00:25:38.437 "ddgst": false 00:25:38.437 }, 00:25:38.437 "method": "bdev_nvme_attach_controller" 00:25:38.438 },{ 00:25:38.438 "params": { 00:25:38.438 "name": "Nvme10", 00:25:38.438 "trtype": "tcp", 00:25:38.438 "traddr": "10.0.0.2", 00:25:38.438 "adrfam": "ipv4", 00:25:38.438 "trsvcid": "4420", 00:25:38.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:38.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:38.438 "hdgst": false, 00:25:38.438 "ddgst": false 00:25:38.438 }, 00:25:38.438 "method": "bdev_nvme_attach_controller" 00:25:38.438 }' 00:25:38.697 [2024-11-06 12:32:10.072621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.697 [2024-11-06 12:32:10.124745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.599 Running I/O for 10 seconds... 00:25:40.599 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.599 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:40.599 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:40.599 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.599 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:40.600 12:32:12 
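The long JSON dump above is `gen_nvmf_target_json` at work: for each subsystem index it appends one controller-attach stanza to a bash array via a heredoc, then joins the stanzas with `IFS=,` and feeds the result to bdevperf through `/dev/fd/63`. A compressed sketch of that pattern, with illustrative names and `printf` standing in for the heredoc (this is a reconstruction of the shape visible in the trace, not SPDK's actual helper):

```shell
# Build one bdev_nvme_attach_controller stanza per subsystem index and
# join them with commas, mirroring the IFS=, + printf seen in the log.
gen_target_json() {
    local subsystem config=()
    for subsystem in "$@"; do
        # One stanza per index, following the shape printed in the trace.
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s"}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem")")
    done
    local IFS=,              # comma-join the array elements on expansion
    printf '%s\n' "${config[*]}"
}
```

Called as `gen_target_json 1 2 3`, this emits three comma-separated stanzas for cnode1 through cnode3; the harness wraps each stanza in an outer `{"params": ...}` list consumed by bdevperf's `--json` option.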
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:40.600 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:40.858 12:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:40.858 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.117 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.376 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:41.376 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:41.376 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:41.376 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:41.635 1364.00 IOPS, 85.25 MiB/s [2024-11-06T11:32:13.250Z] 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- 
# return 0 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 249106 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 249106 ']' 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 249106 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249106 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 249106' 00:25:41.635 killing process with pid 249106 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 249106 00:25:41.635 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 249106 00:25:41.635 Received shutdown signal, test time was about 1.410891 seconds 00:25:41.635 00:25:41.635 Latency(us) 00:25:41.635 [2024-11-06T11:32:13.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.635 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme1n1 : 1.41 181.90 11.37 0.00 
0.00 348311.51 27644.28 337450.82 00:25:41.635 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme2n1 : 1.40 183.23 11.45 0.00 0.00 339336.61 26214.40 329824.81 00:25:41.635 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme3n1 : 1.39 184.57 11.54 0.00 0.00 331222.57 17039.36 341263.83 00:25:41.635 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme4n1 : 1.38 185.81 11.61 0.00 0.00 322617.02 22163.08 329824.81 00:25:41.635 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme5n1 : 1.40 185.74 11.61 0.00 0.00 316954.78 3142.75 335544.32 00:25:41.635 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme6n1 : 1.37 187.05 11.69 0.00 0.00 308553.08 18469.24 310759.80 00:25:41.635 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.635 Verification LBA range: start 0x0 length 0x400 00:25:41.635 Nvme7n1 : 1.38 199.08 12.44 0.00 0.00 281482.61 5957.82 335544.32 00:25:41.635 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.636 Verification LBA range: start 0x0 length 0x400 00:25:41.636 Nvme8n1 : 1.41 226.97 14.19 0.00 0.00 245297.06 13762.56 308853.29 00:25:41.636 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.636 Verification LBA range: start 0x0 length 0x400 00:25:41.636 Nvme9n1 : 1.38 191.65 11.98 0.00 0.00 282826.28 8996.31 306946.79 00:25:41.636 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.636 Verification LBA range: start 0x0 length 0x400 
00:25:41.636 Nvme10n1 : 1.40 182.43 11.40 0.00 0.00 293960.15 17515.99 335544.32 00:25:41.636 [2024-11-06T11:32:13.251Z] =================================================================================================================== 00:25:41.636 [2024-11-06T11:32:13.251Z] Total : 1908.42 119.28 0.00 0.00 305326.83 3142.75 341263.83 00:25:41.895 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 249034 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.831 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:42.832 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.832 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:42.832 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.832 12:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.832 rmmod nvme_tcp 00:25:43.091 rmmod nvme_fabrics 00:25:43.091 rmmod nvme_keyring 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 249034 ']' 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 249034 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 249034 ']' 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 249034 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249034 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 249034' 00:25:43.091 killing process with pid 249034 
00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 249034 00:25:43.091 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 249034 00:25:43.350 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.351 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.888 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.888 
00:25:45.888 real 0m8.274s 00:25:45.888 user 0m26.235s 00:25:45.888 sys 0m1.520s 00:25:45.888 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:45.888 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.888 ************************************ 00:25:45.888 END TEST nvmf_shutdown_tc2 00:25:45.888 ************************************ 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:45.888 ************************************ 00:25:45.888 START TEST nvmf_shutdown_tc3 00:25:45.888 ************************************ 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local 
-g is_hw=no 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.888 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:45.889 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:45.889 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.889 12:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:45.889 Found net devices under 0000:af:00.0: cvl_0_0 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.889 
12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:45.889 Found net devices under 0000:af:00.1: cvl_0_1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.889 12:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:25:45.889 00:25:45.889 --- 10.0.0.2 ping statistics --- 00:25:45.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.889 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:25:45.889 00:25:45.889 --- 10.0.0.1 ping statistics --- 00:25:45.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.889 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.889 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.890 
12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=250544 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 250544 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 250544 ']' 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.890 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.890 [2024-11-06 12:32:17.469284] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:25:45.890 [2024-11-06 12:32:17.469343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.150 [2024-11-06 12:32:17.541366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.150 [2024-11-06 12:32:17.583746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.150 [2024-11-06 12:32:17.583778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.150 [2024-11-06 12:32:17.583785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.150 [2024-11-06 12:32:17.583790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.150 [2024-11-06 12:32:17.583799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.150 [2024-11-06 12:32:17.585439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.150 [2024-11-06 12:32:17.585543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.150 [2024-11-06 12:32:17.585625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:46.150 [2024-11-06 12:32:17.585626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.150 [2024-11-06 12:32:17.737010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.150 12:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.150 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.409 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.409 Malloc1 00:25:46.409 [2024-11-06 12:32:17.844172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.409 Malloc2 00:25:46.409 Malloc3 00:25:46.409 Malloc4 00:25:46.409 Malloc5 00:25:46.668 Malloc6 00:25:46.668 Malloc7 00:25:46.668 Malloc8 00:25:46.668 Malloc9 
00:25:46.668 Malloc10 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=250840 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 250840 /var/tmp/bdevperf.sock 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 250840 ']' 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:46.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.668 { 00:25:46.668 "params": { 00:25:46.668 "name": "Nvme$subsystem", 00:25:46.668 "trtype": "$TEST_TRANSPORT", 00:25:46.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.668 "adrfam": "ipv4", 00:25:46.668 "trsvcid": "$NVMF_PORT", 00:25:46.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.668 "hdgst": ${hdgst:-false}, 00:25:46.668 "ddgst": ${ddgst:-false} 00:25:46.668 }, 00:25:46.668 "method": "bdev_nvme_attach_controller" 00:25:46.668 } 00:25:46.668 EOF 00:25:46.668 )") 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.668 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.668 { 00:25:46.668 "params": { 00:25:46.668 "name": "Nvme$subsystem", 00:25:46.668 "trtype": "$TEST_TRANSPORT", 00:25:46.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.668 
"adrfam": "ipv4", 00:25:46.668 "trsvcid": "$NVMF_PORT", 00:25:46.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.668 "hdgst": ${hdgst:-false}, 00:25:46.668 "ddgst": ${ddgst:-false} 00:25:46.668 }, 00:25:46.669 "method": "bdev_nvme_attach_controller" 00:25:46.669 } 00:25:46.669 EOF 00:25:46.669 )") 00:25:46.669 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": 
${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 [2024-11-06 12:32:18.316344] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:25:46.929 [2024-11-06 12:32:18.316387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250840 ] 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": ${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.929 { 00:25:46.929 "params": { 00:25:46.929 "name": "Nvme$subsystem", 00:25:46.929 "trtype": "$TEST_TRANSPORT", 00:25:46.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.929 "adrfam": "ipv4", 00:25:46.929 "trsvcid": "$NVMF_PORT", 00:25:46.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.929 "hdgst": 
${hdgst:-false}, 00:25:46.929 "ddgst": ${ddgst:-false} 00:25:46.929 }, 00:25:46.929 "method": "bdev_nvme_attach_controller" 00:25:46.929 } 00:25:46.929 EOF 00:25:46.929 )") 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.929 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.930 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.930 { 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme$subsystem", 00:25:46.930 "trtype": "$TEST_TRANSPORT", 00:25:46.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "$NVMF_PORT", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.930 "hdgst": ${hdgst:-false}, 00:25:46.930 "ddgst": ${ddgst:-false} 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 } 00:25:46.930 EOF 00:25:46.930 )") 00:25:46.930 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.930 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:25:46.930 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:46.930 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme1", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme2", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme3", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme4", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 
00:25:46.930 "name": "Nvme5", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme6", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme7", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme8", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme9", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 },{ 00:25:46.930 "params": { 00:25:46.930 "name": "Nvme10", 00:25:46.930 "trtype": "tcp", 00:25:46.930 "traddr": "10.0.0.2", 00:25:46.930 "adrfam": "ipv4", 00:25:46.930 "trsvcid": "4420", 00:25:46.930 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:46.930 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:46.930 "hdgst": false, 00:25:46.930 "ddgst": false 00:25:46.930 }, 00:25:46.930 "method": "bdev_nvme_attach_controller" 00:25:46.930 }' 00:25:46.930 [2024-11-06 12:32:18.398811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.930 [2024-11-06 12:32:18.449064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.308 Running I/O for 10 seconds... 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:48.877 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:49.136 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:49.394 1367.00 IOPS, 85.44 MiB/s [2024-11-06T11:32:21.009Z] 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:49.394 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:49.394 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.394 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.394 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.394 12:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 250544 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 250544 ']' 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 250544 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 250544 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:49.668 12:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 250544'
00:25:49.668 killing process with pid 250544
12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 250544
00:25:49.668 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 250544
00:25:49.668 [2024-11-06 12:32:21.119090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6802f0 is same with the state(6) to be set
00:25:49.669 [2024-11-06 12:32:21.121980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.669 [2024-11-06 12:32:21.122021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.669 [2024-11-06 12:32:21.122035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.669 [2024-11-06 12:32:21.122046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.669 [2024-11-06 12:32:21.122057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.669 [2024-11-06 12:32:21.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.669 [2024-11-06 12:32:21.122078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.669 [2024-11-06 12:32:21.122088]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.669 [2024-11-06 12:32:21.122098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d46f0 is same with the state(6) to be set
00:25:49.669 [2024-11-06 12:32:21.123697] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:49.669 [2024-11-06 12:32:21.125360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ad6e0 is same with the state(6) to be set
00:25:49.670 [2024-11-06 12:32:21.127046] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.670 [2024-11-06 12:32:21.127422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.670 [2024-11-06 12:32:21.127432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 
12:32:21.127859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.127982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.127994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.671 [2024-11-06 12:32:21.128219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.671 [2024-11-06 12:32:21.128241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.671 [2024-11-06 12:32:21.128251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128362] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.128518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.672 [2024-11-06 12:32:21.128528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.672 [2024-11-06 12:32:21.130472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:49.672 [2024-11-06 12:32:21.130541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor 00:25:49.672 [2024-11-06 12:32:21.131815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.672 [2024-11-06 12:32:21.131879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641330 with addr=10.0.0.2, port=4420 00:25:49.672 [2024-11-06 12:32:21.131901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641330 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06
12:32:21.131934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.131995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132006] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.672 [2024-11-06 12:32:21.132114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 
is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6807c0 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.132364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor 00:25:49.673 [2024-11-06 12:32:21.132441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d46f0 (9): Bad file descriptor 
00:25:49.673 [2024-11-06 12:32:21.132948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:49.673 [2024-11-06 12:32:21.132971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:49.673 [2024-11-06 12:32:21.132983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:49.673 [2024-11-06 12:32:21.132995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:25:49.673 [2024-11-06 12:32:21.133648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133720] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 
is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.673 [2024-11-06 12:32:21.133924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 
00:25:49.674 [2024-11-06 12:32:21.133946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.133998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.134004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.134011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set 00:25:49.674 [2024-11-06 12:32:21.134017] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680c90 is same with the state(6) to be set
00:25:49.674 [message above repeated from 12:32:21.134022 through 12:32:21.134044]
00:25:49.674 [2024-11-06 12:32:21.134723] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:49.674 [2024-11-06 12:32:21.135180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x681180 is same with the state(6) to be set
00:25:49.674 [message above repeated from 12:32:21.135202 through 12:32:21.135563]
00:25:49.675 [2024-11-06 12:32:21.136236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x681500 is same with the state(6) to be set
00:25:49.675 [message above repeated from 12:32:21.136254 through 12:32:21.136566]
00:25:49.675 [2024-11-06 12:32:21.136561] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:49.675 [recv state message for tqpair=0x681500 repeated from 12:32:21.136573 through 12:32:21.136628]
00:25:49.675 [2024-11-06 12:32:21.137745] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:49.675 [2024-11-06 12:32:21.138067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6819d0 is same with the state(6) to be set
00:25:49.676 [message above repeated from 12:32:21.138079 through 12:32:21.138454]
00:25:49.676 [2024-11-06 12:32:21.140859] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:49.676 [2024-11-06 12:32:21.141843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:49.676 [2024-11-06 12:32:21.143397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.676 [2024-11-06 12:32:21.143425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641330 with addr=10.0.0.2, port=4420
00:25:49.676 [2024-11-06 12:32:21.143438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641330 is same with the state(6) to be set
00:25:49.676 [2024-11-06 12:32:21.143476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.676 [2024-11-06 12:32:21.143491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.676 [2024-11-06 12:32:21.143504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.676 [2024-11-06 12:32:21.143514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.676 [2024-11-06 12:32:21.143525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.676 [2024-11-06 12:32:21.143535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.676 [2024-11-06 12:32:21.143546] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:49.676 [2024-11-06 12:32:21.143556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.676 [2024-11-06 12:32:21.143566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fec50 is same with the state(6) to be set
00:25:49.676 [12:32:21.143648 through 12:32:21.143733: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08)]
00:25:49.676 [2024-11-06 12:32:21.143744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ffff0 is same with the state(6) to be set
00:25:49.676 [12:32:21.143779 through 12:32:21.143855: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08)]
00:25:49.676 [2024-11-06 12:32:21.143865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2270 is same with the state(6) to be set
00:25:49.676 [12:32:21.143898 through 12:32:21.143973: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08)]
00:25:49.676 [2024-11-06 12:32:21.143983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d4290 is same with the state(6) to be set
00:25:49.676 [2024-11-06 12:32:21.144177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor
00:25:49.677 [2024-11-06 12:32:21.144229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.677 [2024-11-06 12:32:21.144242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.677 [2024-11-06 12:32:21.144258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.677 [2024-11-06 12:32:21.144270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 
12:32:21.144413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 
[2024-11-06 12:32:21.144805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.144982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.144995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.677 [2024-11-06 12:32:21.145142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.677 [2024-11-06 12:32:21.145154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145450] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 12:32:21.145693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.678 [2024-11-06 12:32:21.145703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.678 [2024-11-06 
12:32:21.145714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d8910 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.146973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.146988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.146995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147049] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.678 [2024-11-06 12:32:21.147099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 
is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682370 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.147606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:49.679 [2024-11-06 12:32:21.147644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:49.679 [2024-11-06 12:32:21.147655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:49.679 [2024-11-06 12:32:21.147666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:49.679 [2024-11-06 12:32:21.147676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:25:49.679 [2024-11-06 12:32:21.147874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682840 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.148193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.679 [2024-11-06 12:32:21.148216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d46f0 with addr=10.0.0.2, port=4420 00:25:49.679 [2024-11-06 12:32:21.148227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d46f0 is same with the state(6) to be set 00:25:49.679 [2024-11-06 12:32:21.148609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 
[2024-11-06 12:32:21.148676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.679 [2024-11-06 12:32:21.148943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.679 [2024-11-06 12:32:21.148954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.148966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.148976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.148990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 
12:32:21.149595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.680 [2024-11-06 12:32:21.149870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.680 [2024-11-06 12:32:21.149880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.149893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.149903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.149916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.149926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.149938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.149949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.149972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.681 [2024-11-06 12:32:21.149984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.149993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.150006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.150016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.150029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.150039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.150053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.150063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.150076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.681 [2024-11-06 12:32:21.150086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.150098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d9580 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.150242] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d46f0 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.151722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:49.681 [2024-11-06 12:32:21.151777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f4ef0 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.151793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:49.681 [2024-11-06 12:32:21.151808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:49.681 [2024-11-06 12:32:21.151819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:49.681 [2024-11-06 12:32:21.151829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:25:49.681 [2024-11-06 12:32:21.152357] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:49.681 [2024-11-06 12:32:21.152494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.681 [2024-11-06 12:32:21.152516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f4ef0 with addr=10.0.0.2, port=4420 00:25:49.681 [2024-11-06 12:32:21.152528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f4ef0 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.152592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:49.681 [2024-11-06 12:32:21.152618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f4ef0 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.152833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.681 [2024-11-06 12:32:21.152850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641330 with addr=10.0.0.2, port=4420 00:25:49.681 [2024-11-06 12:32:21.152862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641330 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.152873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:49.681 [2024-11-06 12:32:21.152882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:49.681 [2024-11-06 12:32:21.152893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:49.681 [2024-11-06 12:32:21.152904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:25:49.681 [2024-11-06 12:32:21.152947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.152987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:49.681 [2024-11-06 12:32:21.152997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:49.681 [2024-11-06 12:32:21.153007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:49.681 [2024-11-06 12:32:21.153016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:25:49.681 [2024-11-06 12:32:21.153169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fec50 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.153212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 
[2024-11-06 12:32:21.153286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26118e0 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.153342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x263dfe0 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.153475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.681 [2024-11-06 12:32:21.153557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.681 [2024-11-06 12:32:21.153566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8610 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.153588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ffff0 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.153609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2270 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.153630] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d4290 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.157741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:49.681 [2024-11-06 12:32:21.157984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.681 [2024-11-06 12:32:21.158003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d46f0 with addr=10.0.0.2, port=4420 00:25:49.681 [2024-11-06 12:32:21.158019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d46f0 is same with the state(6) to be set 00:25:49.681 [2024-11-06 12:32:21.158063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d46f0 (9): Bad file descriptor 00:25:49.681 [2024-11-06 12:32:21.158105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:49.681 [2024-11-06 12:32:21.158116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:49.681 [2024-11-06 12:32:21.158127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:49.682 [2024-11-06 12:32:21.158137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:25:49.682 [2024-11-06 12:32:21.162081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:49.682 [2024-11-06 12:32:21.162330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.682 [2024-11-06 12:32:21.162351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f4ef0 with addr=10.0.0.2, port=4420 00:25:49.682 [2024-11-06 12:32:21.162363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f4ef0 is same with the state(6) to be set 00:25:49.682 [2024-11-06 12:32:21.162406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f4ef0 (9): Bad file descriptor 00:25:49.682 [2024-11-06 12:32:21.162449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:49.682 [2024-11-06 12:32:21.162471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:49.682 [2024-11-06 12:32:21.162485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:49.682 [2024-11-06 12:32:21.162496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:25:49.682 [2024-11-06 12:32:21.162717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:49.682 [2024-11-06 12:32:21.163016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.682 [2024-11-06 12:32:21.163036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641330 with addr=10.0.0.2, port=4420 00:25:49.682 [2024-11-06 12:32:21.163048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641330 is same with the state(6) to be set 00:25:49.682 [2024-11-06 12:32:21.163092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor 00:25:49.682 [2024-11-06 12:32:21.163135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:49.682 [2024-11-06 12:32:21.163148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:49.682 [2024-11-06 12:32:21.163160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:49.682 [2024-11-06 12:32:21.163170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:25:49.682 [2024-11-06 12:32:21.163221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26118e0 (9): Bad file descriptor 00:25:49.682 [2024-11-06 12:32:21.163248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263dfe0 (9): Bad file descriptor 00:25:49.682 [2024-11-06 12:32:21.163271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8610 (9): Bad file descriptor 00:25:49.682 [2024-11-06 12:32:21.163400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.163982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.163994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.682 [2024-11-06 12:32:21.164005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.682 [2024-11-06 12:32:21.164018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 
12:32:21.164197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164332] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 
[2024-11-06 12:32:21.164604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.683 [2024-11-06 12:32:21.164854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.683 [2024-11-06 12:32:21.164865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.683 [2024-11-06 12:32:21.164877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-11-06 12:32:21.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.683 [2024-11-06 12:32:21.164901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-11-06 12:32:21.164911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.683 [2024-11-06 12:32:21.164926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-11-06 12:32:21.164936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.683 [2024-11-06 12:32:21.164948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9ad0 is same with the state(6) to be set
00:25:49.683 [2024-11-06 12:32:21.166442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.683 [2024-11-06 12:32:21.166468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" record pairs repeat for cid:1 through cid:62 (lba 24704 through 32512, len:128), timestamps 12:32:21.166484-12:32:21.167987 ...]
00:25:49.685 [2024-11-06 12:32:21.168001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.685 [2024-11-06 12:32:21.168013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.685 [2024-11-06 12:32:21.168024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c9800 is same with the state(6) to be set
00:25:49.685 [2024-11-06 12:32:21.169517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.685 [2024-11-06 12:32:21.169536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" record pairs repeat for cid:1 through cid:49 (lba 24704 through 30848, len:128), timestamps 12:32:21.169555-12:32:21.170738 ...]
00:25:49.686 [2024-11-06 12:32:21.170751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.686 
[2024-11-06 12:32:21.170761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.686 [2024-11-06 12:32:21.170774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.686 [2024-11-06 12:32:21.170785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.170978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.171001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.171016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.171027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.171040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.171050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.171062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.171073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.171087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d6b20 is same with the state(6) to be set 00:25:49.687 [2024-11-06 12:32:21.172585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.687 [2024-11-06 12:32:21.172674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.172977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.172990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.687 [2024-11-06 12:32:21.173087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.687 [2024-11-06 12:32:21.173230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.687 [2024-11-06 12:32:21.173243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 
12:32:21.173666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.173980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.173990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 
[2024-11-06 12:32:21.174064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.688 [2024-11-06 12:32:21.174159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.688 [2024-11-06 12:32:21.174170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7ff0 is same with the state(6) to be set 00:25:49.688 [2024-11-06 12:32:21.175630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:49.688 [2024-11-06 12:32:21.175657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:49.688 [2024-11-06 12:32:21.175671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:49.688 [2024-11-06 12:32:21.175684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:49.688 [2024-11-06 12:32:21.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.689 [2024-11-06 12:32:21.176099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d4290 with addr=10.0.0.2, port=4420 00:25:49.689 [2024-11-06 12:32:21.176112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d4290 is same with the state(6) to be set 00:25:49.689 [2024-11-06 12:32:21.176356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.689 [2024-11-06 12:32:21.176373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2270 with addr=10.0.0.2, port=4420 00:25:49.689 [2024-11-06 12:32:21.176384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2270 is same with the state(6) to be set 00:25:49.689 [2024-11-06 12:32:21.176567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.689 [2024-11-06 12:32:21.176584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ffff0 with addr=10.0.0.2, port=4420 00:25:49.689 [2024-11-06 12:32:21.176595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ffff0 is same with the state(6) to be set 00:25:49.689 [2024-11-06 12:32:21.176718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.689 [2024-11-06 12:32:21.176738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fec50 with addr=10.0.0.2, port=4420 00:25:49.689 
[2024-11-06 12:32:21.176748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fec50 is same with the state(6) to be set 00:25:49.689 [2024-11-06 12:32:21.178080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178787] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.689 [2024-11-06 12:32:21.178919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-06 12:32:21.178931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.178944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.178955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.178967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.178977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.178989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 
12:32:21.179064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.690 [2024-11-06 12:32:21.179479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-06 12:32:21.179600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.690 [2024-11-06 12:32:21.179611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.690 [2024-11-06 12:32:21.179626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.690 [2024-11-06 12:32:21.179638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.690 [2024-11-06 12:32:21.179651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.690 [2024-11-06 12:32:21.179662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:49.690 [2024-11-06 12:32:21.179673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25dc0a0 is same with the state(6) to be set
00:25:49.690 [2024-11-06 12:32:21.181130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:49.690 [2024-11-06 12:32:21.181151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:25:49.690 [2024-11-06 12:32:21.181164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:49.690 [2024-11-06 12:32:21.181179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:49.690 [2024-11-06 12:32:21.181192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:49.690 task offset: 17024 on job bdev=Nvme10n1 fails
00:25:49.690
00:25:49.690 Latency(us)
00:25:49.690 [2024-11-06T11:32:21.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme1n1 ended in about 1.27 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.690 Nvme1n1 : 1.27 105.21 6.58 50.25 0.00 407536.73 20137.43 345076.83
00:25:49.690 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme2n1 ended in about 1.29 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.690 Nvme2n1 : 1.29 148.51 9.28 49.50 0.00 313904.87 18945.86 333637.82
00:25:49.690 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme3n1 ended in about 1.30 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.690 Nvme3n1 : 1.30 148.16 9.26 49.39 0.00 308724.83 18945.86 308853.29
00:25:49.690 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme4n1 ended in about 1.30 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.690 Nvme4n1 : 1.30 147.81 9.24 49.27 0.00 303597.15 15013.70 339357.32
00:25:49.690 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme5n1 ended in about 1.30 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.690 Nvme5n1 : 1.30 147.46 9.22 49.15 0.00 298437.59 22282.24 335544.32
00:25:49.690 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.690 Job: Nvme6n1 ended in about 1.28 seconds with error
00:25:49.690 Verification LBA range: start 0x0 length 0x400
00:25:49.691 Nvme6n1 : 1.28 150.21 9.39 50.07 0.00 286580.60 22639.71 322198.81
00:25:49.691 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.691 Verification LBA range: start 0x0 length 0x400
00:25:49.691 Nvme7n1 : 1.27 202.11 12.63 0.00 0.00 277705.08 19541.64 310759.80
00:25:49.691 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.691 Job: Nvme8n1 ended in about 1.31 seconds with error
00:25:49.691 Verification LBA range: start 0x0 length 0x400
00:25:49.691 Nvme8n1 : 1.31 146.84 9.18 48.95 0.00 282078.02 15371.17 335544.32
00:25:49.691 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.691 Verification LBA range: start 0x0 length 0x400
00:25:49.691 Nvme9n1 : 1.27 205.56 12.85 0.00 0.00 260905.47 6106.76 282162.27
00:25:49.691 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:49.691 Job: Nvme10n1 ended in about 1.26 seconds with error
00:25:49.691 Verification LBA range: start 0x0 length 0x400
00:25:49.691 Nvme10n1 : 1.26 105.80 6.61 50.91 0.00 334727.84 4706.68 333637.82
00:25:49.691 [2024-11-06T11:32:21.306Z] ===================================================================================================================
00:25:49.691 [2024-11-06T11:32:21.306Z] Total : 1507.65 94.23 397.48 0.00 304294.60 4706.68 345076.83
00:25:49.691 [2024-11-06 12:32:21.230482] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:49.691 [2024-11-06 12:32:21.230541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:49.691 [2024-11-06 12:32:21.230634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d4290 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.230656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2270 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.230670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ffff0 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.230684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fec50 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.231142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed,
errno = 111
00:25:49.691 [2024-11-06 12:32:21.231177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d46f0 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.231203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d46f0 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.231473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.691 [2024-11-06 12:32:21.231498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f4ef0 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.231517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f4ef0 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.231678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.691 [2024-11-06 12:32:21.231701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2641330 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.231716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2641330 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.231967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.691 [2024-11-06 12:32:21.231990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x263dfe0 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.232005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x263dfe0 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.232310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.691 [2024-11-06 12:32:21.232333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e8610 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.232348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8610 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.232540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.691 [2024-11-06 12:32:21.232564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26118e0 with addr=10.0.0.2, port=4420
00:25:49.691 [2024-11-06 12:32:21.232580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26118e0 is same with the state(6) to be set
00:25:49.691 [2024-11-06 12:32:21.232597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.232618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.232633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.232650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.232667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.232680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.232693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.232706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.232721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.232734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.232748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.232760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.232774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.232787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.232803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.232821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.233492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d46f0 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f4ef0 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2641330 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263dfe0 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8610 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26118e0 (9): Bad file descriptor
00:25:49.691 [2024-11-06 12:32:21.233670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:49.691 [2024-11-06 12:32:21.233692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:49.691 [2024-11-06 12:32:21.233709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:25:49.691 [2024-11-06 12:32:21.233726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:25:49.691 [2024-11-06 12:32:21.233780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.233795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.233808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.233828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.233855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.233868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.233882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.233895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.233910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.233924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.233938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.233950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:25:49.691 [2024-11-06 12:32:21.233964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:25:49.691 [2024-11-06 12:32:21.233977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:25:49.691 [2024-11-06 12:32:21.233989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:25:49.691 [2024-11-06 12:32:21.234002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:25:49.691 [2024-11-06 12:32:21.234018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.234030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.234044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.234056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:25:49.692 [2024-11-06 12:32:21.234070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.234084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.234098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.234111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:25:49.692 [2024-11-06 12:32:21.234374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.692 [2024-11-06 12:32:21.234401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25fec50 with addr=10.0.0.2, port=4420 00:25:49.692 [2024-11-06 12:32:21.234416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fec50 is same with the state(6) to be set 00:25:49.692 [2024-11-06 12:32:21.234592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.692 [2024-11-06 12:32:21.234614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25ffff0 with addr=10.0.0.2, port=4420 00:25:49.692 [2024-11-06 12:32:21.234629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ffff0 is same with the state(6) to be set 00:25:49.692 [2024-11-06 12:32:21.234885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.692 [2024-11-06 12:32:21.234906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2270 with addr=10.0.0.2, port=4420 00:25:49.692 [2024-11-06 12:32:21.234920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2270 is same with the state(6) to be set 00:25:49.692 [2024-11-06 12:32:21.235116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.692 [2024-11-06 12:32:21.235138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d4290 with addr=10.0.0.2, port=4420 00:25:49.692 [2024-11-06 12:32:21.235155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d4290 is same with the state(6) to be set 00:25:49.692 [2024-11-06 12:32:21.235208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25fec50 (9): Bad file descriptor 00:25:49.692 [2024-11-06 
12:32:21.235230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ffff0 (9): Bad file descriptor 00:25:49.692 [2024-11-06 12:32:21.235249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2270 (9): Bad file descriptor 00:25:49.692 [2024-11-06 12:32:21.235267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d4290 (9): Bad file descriptor 00:25:49.692 [2024-11-06 12:32:21.235331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.235349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.235364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.235377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:49.692 [2024-11-06 12:32:21.235392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.235404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.235417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.235431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:25:49.692 [2024-11-06 12:32:21.235445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.235473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.235487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.235500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:25:49.692 [2024-11-06 12:32:21.235515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:49.692 [2024-11-06 12:32:21.235527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:49.692 [2024-11-06 12:32:21.235540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:49.692 [2024-11-06 12:32:21.235553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:25:49.951 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 250840 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 250840 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 250840 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.329 rmmod nvme_tcp 00:25:51.329 rmmod nvme_fabrics 00:25:51.329 rmmod nvme_keyring 00:25:51.329 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:51.330 12:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 250544 ']' 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 250544 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 250544 ']' 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 250544 00:25:51.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (250544) - No such process 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 250544 is not found' 00:25:51.330 Process with pid 250544 is not found 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.330 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.233 00:25:53.233 real 0m7.598s 00:25:53.233 user 0m18.796s 00:25:53.233 sys 0m1.460s 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:53.233 ************************************ 00:25:53.233 END TEST nvmf_shutdown_tc3 00:25:53.233 ************************************ 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:53.233 ************************************ 00:25:53.233 START TEST nvmf_shutdown_tc4 00:25:53.233 ************************************ 00:25:53.233 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:53.233 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:53.233 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.234 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:53.234 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:53.234 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.234 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:25:53.234 Found net devices under 0000:af:00.0: cvl_0_0 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:53.234 Found net devices under 0000:af:00.1: cvl_0_1 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.234 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.234 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.493 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:53.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:25:53.493 00:25:53.493 --- 10.0.0.2 ping statistics --- 00:25:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.493 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:25:53.493 00:25:53.493 --- 10.0.0.1 ping statistics --- 00:25:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.493 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.493 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=252080 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 252080 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 252080 ']' 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:53.493 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:53.752 [2024-11-06 12:32:25.125902] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:25:53.752 [2024-11-06 12:32:25.125943] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.752 [2024-11-06 12:32:25.187700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.752 [2024-11-06 12:32:25.225679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.752 [2024-11-06 12:32:25.225719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.752 [2024-11-06 12:32:25.225726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.752 [2024-11-06 12:32:25.225731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.752 [2024-11-06 12:32:25.225735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
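The `nvmfpid=252080` / `waitforlisten 252080` sequence above is a startup poll: rather than sleeping a fixed time after launching `nvmf_tgt`, the harness retries (note `max_retries=100`) until the daemon is listening on `/var/tmp/spdk.sock`. A minimal sketch of that pattern, using a background subshell touching a file as a stand-in for the target creating its RPC socket (function and variable names here are illustrative, not SPDK's):

```shell
# Poll until a path exists, up to max_retries attempts, instead of a blind sleep.
waitfor_file() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timeout waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}

tmpsock=$(mktemp -u)
( sleep 0.3; touch "$tmpsock" ) &   # stand-in for nvmf_tgt starting up
waitfor_file "$tmpsock" 100 && echo "listener ready"
rm -f "$tmpsock"
wait
```

The real helper additionally probes the UNIX socket with an RPC rather than just checking for its existence, which is why the log prints "Waiting for process to start up and listen on UNIX domain socket" before returning 0.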
00:25:53.752 [2024-11-06 12:32:25.227380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.752 [2024-11-06 12:32:25.227489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.752 [2024-11-06 12:32:25.227584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.752 [2024-11-06 12:32:25.227586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.752 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:53.752 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:25:53.752 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.752 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.752 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.010 [2024-11-06 12:32:25.401779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.010 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.010 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.010 Malloc1 00:25:54.010 [2024-11-06 12:32:25.508942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.010 Malloc2 00:25:54.010 Malloc3 00:25:54.010 Malloc4 00:25:54.266 Malloc5 00:25:54.266 Malloc6 00:25:54.266 Malloc7 00:25:54.266 Malloc8 00:25:54.266 Malloc9 
00:25:54.266 Malloc10 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=252327 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:54.524 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:54.524 [2024-11-06 12:32:26.020285] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
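The `trap 'process_shm ...; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT` line that follows is the cleanup idiom this test relies on: the background `spdk_nvme_perf` process (pid stored in `perfpid=252327`) is reaped whether the test finishes, fails, or is interrupted. A reduced sketch of the same idiom, with a plain `sleep` standing in for the perf process (names illustrative):

```shell
# Register one cleanup handler so a background worker is always reaped.
sleep 60 &                 # stand-in for spdk_nvme_perf
perfpid=$!

cleanup() {
    # '|| true' keeps cleanup from aborting if the process already exited.
    kill -9 "$perfpid" 2>/dev/null || true
    wait "$perfpid" 2>/dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT

# ... test body would run here ...

trap - EXIT                # success path: untrap, then clean up explicitly
cleanup
echo "perf reaped"
```

Trapping EXIT as well as SIGINT/SIGTERM is what lets the later `killprocess 252080` / `kill -0` liveness check in the log run even when an earlier step fails.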
00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 252080 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 252080 ']' 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 252080 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:59.792 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 252080 00:25:59.792 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:59.792 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:59.792 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 252080' 00:25:59.792 killing process with pid 252080 00:25:59.792 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 252080 00:25:59.792 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 252080 00:25:59.792 [2024-11-06 12:32:31.027579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027632] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.027677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b560 is same with the state(6) to be set 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 [2024-11-06 12:32:31.028126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 
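The flood of `Write completed with error (sct=0, sc=8)` lines that follows is expected during this shutdown test: qpairs are being torn down under load. A hedged helper for putting names on those pairs — the mapping below is my reading of the NVMe base specification's Generic Command Status table (status code type 0), where sc=0x08 is "Command Aborted due to SQ Deletion"; verify against the spec revision in use before relying on it:

```shell
# Translate (sct, sc) pairs from SPDK completion logs into spec names.
# Mapping is an assumption taken from the NVMe base spec, not from SPDK.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
        0) case "$sc" in
               0) echo "Generic: Successful Completion" ;;
               8) echo "Generic: Command Aborted due to SQ Deletion" ;;
               *) echo "Generic: sc=$sc (see spec)" ;;
           esac ;;
        1) echo "Command Specific: sc=$sc" ;;
        2) echo "Media/Data Integrity: sc=$sc" ;;
        *) echo "sct=$sct sc=$sc (vendor/path/unknown)" ;;
    esac
}
decode_nvme_status 0 8
```

Read that way, the errors corroborate the surrounding `CQ transport error -6 (No such device or address)` messages: in-flight writes are aborted because their submission queues are deleted as the target shuts down.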
00:25:59.792 [2024-11-06 12:32:31.028156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139ba50 is same with the state(6) to be set 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 [2024-11-06 12:32:31.028488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 starting I/O failed: -6 00:25:59.792 [2024-11-06 12:32:31.028521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the 
state(6) to be set 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 [2024-11-06 12:32:31.028535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 [2024-11-06 12:32:31.028560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 [2024-11-06 12:32:31.028567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf20 is same with the state(6) to be set 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 Write completed with error (sct=0, sc=8) 00:25:59.792 starting I/O failed: -6 00:25:59.792 [2024-11-06 12:32:31.028733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.792 NVMe io qpair process completion error 00:25:59.792 [2024-11-06 12:32:31.028816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028846] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 [2024-11-06 12:32:31.028890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b090 is same with the state(6) to be set 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error 
(sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 [2024-11-06 12:32:31.029900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 
00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write 
completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 [2024-11-06 12:32:31.030981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 
starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 
Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 [2024-11-06 12:32:31.032289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 
00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: 
-6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O 
failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.793 Write completed with error (sct=0, sc=8) 00:25:59.793 starting I/O failed: -6 00:25:59.794 [2024-11-06 12:32:31.034287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.794 NVMe io qpair process completion error 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed 
with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 [2024-11-06 12:32:31.035611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with 
error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 
00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 [2024-11-06 12:32:31.036798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.794 starting I/O failed: -6 00:25:59.794 starting I/O failed: -6 00:25:59.794 starting I/O failed: -6 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 
starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 
Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 [2024-11-06 12:32:31.038322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 
00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, 
sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error 
(sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 [2024-11-06 12:32:31.040767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.794 NVMe io qpair process completion error 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 
00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 [2024-11-06 12:32:31.041903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.794 starting I/O failed: -6 00:25:59.794 starting I/O failed: -6 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error 
(sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 starting I/O failed: -6 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.794 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 [2024-11-06 12:32:31.043166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport 
error -6 (No such device or address) on qpair id 3 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting 
I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write completed with error (sct=0, sc=8) 00:25:59.795 starting I/O failed: -6 00:25:59.795 Write 
completed with error (sct=0, sc=8)
00:25:59.795 starting I/O failed: -6
00:25:59.795 [2024-11-06 12:32:31.044496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:59.795 Write completed with error (sct=0, sc=8)
00:25:59.795 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.795 [2024-11-06 12:32:31.046935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.795 NVMe io qpair process completion error
00:25:59.795 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.795 [2024-11-06 12:32:31.048133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.795 [2024-11-06 12:32:31.049212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.796 [2024-11-06 12:32:31.050510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.796 [2024-11-06 12:32:31.058557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.796 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.796 [2024-11-06 12:32:31.059868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.796 [2024-11-06 12:32:31.061018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.797 [2024-11-06 12:32:31.062456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.797 [2024-11-06 12:32:31.064835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:59.797 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.797 [2024-11-06 12:32:31.066169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:59.797 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed ...]
00:25:59.797 Write completed with error
(sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 [2024-11-06 12:32:31.067323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write 
completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 
00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 [2024-11-06 12:32:31.068721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with 
error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed 
with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.797 Write completed with error (sct=0, sc=8) 00:25:59.797 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write 
completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 [2024-11-06 12:32:31.078398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.798 NVMe io qpair process completion error 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with 
error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 
00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 [2024-11-06 12:32:31.079984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 
00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 [2024-11-06 12:32:31.081385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with 
error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 
Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 [2024-11-06 12:32:31.083054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, 
sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error 
(sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with 
error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.798 Write completed with error (sct=0, sc=8) 00:25:59.798 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 [2024-11-06 12:32:31.085522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.799 NVMe io qpair process completion error 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with 
error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 
00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 [2024-11-06 12:32:31.087488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.799 starting I/O failed: -6 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write 
completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 [2024-11-06 12:32:31.088917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 
00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 
00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 [2024-11-06 12:32:31.090492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with 
error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed 
with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write 
completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 [2024-11-06 12:32:31.097633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.799 NVMe io qpair process completion error 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 
00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 [2024-11-06 12:32:31.099157] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.799 starting I/O failed: -6 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.799 starting I/O failed: -6 00:25:59.799 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error 
(sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 [2024-11-06 12:32:31.100267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting 
I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write 
completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 [2024-11-06 12:32:31.101630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 
Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 
00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: 
-6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 starting I/O failed: -6 00:25:59.800 [2024-11-06 12:32:31.104123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.800 NVMe io qpair process completion error 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 00:25:59.800 Write completed with error (sct=0, sc=8) 
00:25:59.800 Write completed with error (sct=0, sc=8)
00:25:59.800 [identical "Write completed with error (sct=0, sc=8)" lines repeated; duplicates omitted]
00:25:59.801 [2024-11-06 12:32:31.106493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:59.801 Write completed with error (sct=0, sc=8)
00:25:59.801 [identical "Write completed with error (sct=0, sc=8)" lines repeated; duplicates omitted]
00:25:59.801 [2024-11-06 12:32:31.118099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:59.801 NVMe io qpair process completion error
00:25:59.801 Initializing NVMe Controllers
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:59.801 Controller IO queue size 128, less than required.
00:25:59.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:59.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:59.801 Initialization complete. Launching workers.
00:25:59.801 ========================================================
00:25:59.801 Latency(us)
00:25:59.801 Device Information : IOPS MiB/s Average min max
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1588.37 68.25 80610.00 1162.59 168009.59
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1593.38 68.47 80407.17 1163.97 171702.61
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1595.99 68.58 80394.28 907.87 180220.35
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1589.46 68.30 80654.02 780.78 182795.72
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1539.62 66.16 81926.74 1078.59 144075.32
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1582.72 68.01 79720.25 1037.34 141603.65
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1535.27 65.97 82217.27 879.03 144498.32
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1587.29 68.20 79556.52 931.09 142450.05
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1597.95 68.66 79149.36 1166.89 144242.89
00:25:59.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1590.77 68.35 79539.81 994.13 144536.04
00:25:59.801 ========================================================
00:25:59.801 Total : 15800.83 678.94 80406.25 780.78 182795.72
00:25:59.801
00:25:59.801 [2024-11-06 12:32:31.121726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a9f0 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.121804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1b9e0 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.121859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1b380 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.121914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a390 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.121967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1c540 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.122019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a060 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.122070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1c360 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.122121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1b6b0 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.122174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1a6c0 is same with the state(6) to be set
00:25:59.801 [2024-11-06 12:32:31.122225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1b050 is same with the state(6) to be set
00:25:59.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:59.801 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 252327
00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 252327
00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4
-- common/autotest_common.sh@638 -- # local arg=wait 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 252327 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.177 rmmod nvme_tcp 00:26:01.177 rmmod nvme_fabrics 00:26:01.177 rmmod nvme_keyring 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 252080 ']' 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 252080 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 252080 ']' 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 252080 00:26:01.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (252080) - No such process 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 252080 is not found' 00:26:01.177 Process with pid 252080 is not found 00:26:01.177 
12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:03.080 00:26:03.080 real 0m9.783s 00:26:03.080 user 0m26.033s 00:26:03.080 sys 0m4.314s 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:03.080 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:03.080 ************************************ 00:26:03.080 END TEST nvmf_shutdown_tc4 00:26:03.080 ************************************ 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:03.080 00:26:03.080 real 0m41.133s 00:26:03.080 user 1m46.581s 00:26:03.080 sys 0m13.060s 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:03.080 ************************************ 00:26:03.080 END TEST nvmf_shutdown 00:26:03.080 ************************************ 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:03.080 ************************************ 00:26:03.080 START TEST nvmf_nsid 00:26:03.080 ************************************ 00:26:03.080 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:03.339 * Looking for test storage... 
00:26:03.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.339 
12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.339 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.340 --rc genhtml_branch_coverage=1 00:26:03.340 --rc genhtml_function_coverage=1 00:26:03.340 --rc genhtml_legend=1 00:26:03.340 --rc geninfo_all_blocks=1 00:26:03.340 --rc 
geninfo_unexecuted_blocks=1 00:26:03.340 00:26:03.340 ' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.340 --rc genhtml_branch_coverage=1 00:26:03.340 --rc genhtml_function_coverage=1 00:26:03.340 --rc genhtml_legend=1 00:26:03.340 --rc geninfo_all_blocks=1 00:26:03.340 --rc geninfo_unexecuted_blocks=1 00:26:03.340 00:26:03.340 ' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.340 --rc genhtml_branch_coverage=1 00:26:03.340 --rc genhtml_function_coverage=1 00:26:03.340 --rc genhtml_legend=1 00:26:03.340 --rc geninfo_all_blocks=1 00:26:03.340 --rc geninfo_unexecuted_blocks=1 00:26:03.340 00:26:03.340 ' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.340 --rc genhtml_branch_coverage=1 00:26:03.340 --rc genhtml_function_coverage=1 00:26:03.340 --rc genhtml_legend=1 00:26:03.340 --rc geninfo_all_blocks=1 00:26:03.340 --rc geninfo_unexecuted_blocks=1 00:26:03.340 00:26:03.340 ' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.340 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:03.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:03.340 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:08.611 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:08.611 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:08.612 Found net devices under 0000:af:00.0: cvl_0_0 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:08.612 Found net devices under 0000:af:00.1: cvl_0_1 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.612 12:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.612 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.870 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.871 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:26:08.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:26:08.871 00:26:08.871 --- 10.0.0.2 ping statistics --- 00:26:08.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.871 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:26:08.871 00:26:08.871 --- 10.0.0.1 ping statistics --- 00:26:08.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.871 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.871 12:32:40 
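The trace above wires the target NIC into its own network namespace and verifies connectivity in both directions before starting the target. A hedged sketch of that wiring, using the interface names (cvl_0_0 / cvl_0_1), namespace name, and 10.0.0.0/24 addresses taken from the log; the function name `setup_tcp_netns` is illustrative, and the commands need root plus the real NICs, so it simply reports and returns when either is missing:

```shell
# Sketch of the netns setup performed in nvmf/common.sh's nvmf_tcp_init
# (per the trace): target NIC moves into a namespace, initiator NIC stays
# in the default namespace, then a one-packet ping each way confirms the link.
setup_tcp_netns() {
  [ "$(id -u)" -eq 0 ] || { echo "needs root; skipping"; return 0; }
  ip link show cvl_0_0 >/dev/null 2>&1 || { echo "no cvl_0_0; skipping"; return 0; }
  ip netns add cvl_0_0_ns_spdk                  # target side gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2 &&                         # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
setup_tcp_netns
```

The `ip netns exec` prefix is the same mechanism the trace later uses to launch `nvmf_tgt` inside the namespace (`NVMF_TARGET_NS_CMD`).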
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=257075 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 257075 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 257075 ']' 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:08.871 [2024-11-06 12:32:40.455270] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:26:08.871 [2024-11-06 12:32:40.455312] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.134 [2024-11-06 12:32:40.540578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.134 [2024-11-06 12:32:40.587648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.134 [2024-11-06 12:32:40.587685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.134 [2024-11-06 12:32:40.587696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.134 [2024-11-06 12:32:40.587705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.134 [2024-11-06 12:32:40.587714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:09.134 [2024-11-06 12:32:40.588425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=257126 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.134 
12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8828ea30-44da-46f2-a582-bfeee020c500 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1ef3ba33-cb3a-4246-b385-3175f5dfc716 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=84d9264d-efa1-4585-b16f-2c0bf9e76dea 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.134 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:09.437 null0 00:26:09.437 null1 00:26:09.437 null2 00:26:09.437 [2024-11-06 12:32:40.778020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.437 [2024-11-06 12:32:40.780950] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:26:09.438 [2024-11-06 12:32:40.781008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257126 ] 00:26:09.438 [2024-11-06 12:32:40.802253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 257126 /var/tmp/tgt2.sock 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 257126 ']' 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:09.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:09.438 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:09.438 [2024-11-06 12:32:40.847537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.438 [2024-11-06 12:32:40.888420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.788 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:09.788 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:26:09.788 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:10.089 [2024-11-06 12:32:41.519177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.089 [2024-11-06 12:32:41.535291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:10.089 nvme0n1 nvme0n2 00:26:10.089 nvme1n1 00:26:10.089 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:10.089 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:10.089 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:26:11.465 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8828ea30-44da-46f2-a582-bfeee020c500 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:12.401 12:32:43 
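After `nvme connect`, the `nvme_connect` helper in the trace scans `/sys/class/nvme/nvme*` and matches each controller's `subsysnqn` against the subsystem NQN it just connected to, echoing the controller name on a hit. A minimal sketch under those assumptions; the function name and the overridable sysfs root are illustrative additions (the real helper in `target/nsid.sh` scans `/sys/class/nvme` directly):

```shell
# Return the controller name (e.g. "nvme0") whose subsysnqn matches the
# given NQN, mirroring the per-controller loop in the trace above.
find_ctrlr() {
  local subnqn=$1 sysfs=${2:-/sys/class/nvme} ctrlr
  for ctrlr in "$sysfs"/nvme*; do
    [ -e "$ctrlr/subsysnqn" ] || continue
    if [ "$(cat "$ctrlr/subsysnqn")" = "$subnqn" ]; then
      basename "$ctrlr"            # the matching controller, e.g. nvme0
      return 0
    fi
  done
  return 1                         # no controller exposes this NQN
}
```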
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8828ea3044da46f2a582bfeee020c500 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8828EA3044DA46F2A582BFEEE020C500 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8828EA3044DA46F2A582BFEEE020C500 == \8\8\2\8\E\A\3\0\4\4\D\A\4\6\F\2\A\5\8\2\B\F\E\E\E\0\2\0\C\5\0\0 ]] 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1ef3ba33-cb3a-4246-b385-3175f5dfc716 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:12.401 
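The `waitforblk` calls above poll `lsblk` until the newly attached namespace appears as a block device, sleeping between attempts. A sketch of that retry loop; the 15-try bound mirrors the `'[' 0 -lt 15 ']'` test visible in the trace:

```shell
# Poll until a block device with the given name shows up in `lsblk` output,
# giving up after 15 one-second retries, as autotest_common.sh's waitforblk
# does in the trace above.
waitforblk() {
  local name=$1 i=0
  while ! lsblk -l -o NAME | grep -q -w "$name"; do
    if [ "$i" -ge 15 ]; then
      return 1                     # device never appeared
    fi
    i=$((i + 1))
    sleep 1
  done
  return 0
}
```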
12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1ef3ba33cb3a4246b3853175f5dfc716 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1EF3BA33CB3A4246B3853175F5DFC716 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1EF3BA33CB3A4246B3853175F5DFC716 == \1\E\F\3\B\A\3\3\C\B\3\A\4\2\4\6\B\3\8\5\3\1\7\5\F\5\D\F\C\7\1\6 ]] 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 84d9264d-efa1-4585-b16f-2c0bf9e76dea 00:26:12.401 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:12.401 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:12.401 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:12.401 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:26:12.401 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=84d9264defa14585b16f2c0bf9e76dea 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 84D9264DEFA14585B16F2C0BF9E76DEA 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 84D9264DEFA14585B16F2C0BF9E76DEA == \8\4\D\9\2\6\4\D\E\F\A\1\4\5\8\5\B\1\6\F\2\C\0\B\F\9\E\7\6\D\E\A ]] 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 257126 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 257126 ']' 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 257126 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:12.659 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257126 00:26:12.918 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:12.918 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:12.918 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257126' 00:26:12.918 killing process with pid 257126 00:26:12.918 12:32:44 
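The three NGUID checks above hinge on the fact that, for these namespaces, the NGUID is simply the namespace UUID with its dashes removed: `uuid2nguid` pipes the UUID through `tr -d -`, and both sides are uppercased before the `[[ == ]]` match against `nvme id-ns ... | jq -r .nguid`. A pure-bash equivalent of that conversion:

```shell
# Convert a UUID to the NGUID form compared in the trace: strip the dashes
# and uppercase the hex digits (nvme id-ns reports lowercase, so the script
# normalizes case before matching).
uuid2nguid() {
  local uuid=${1//-/}        # drop every dash
  printf '%s\n' "${uuid^^}"  # uppercase the remaining 32 hex digits
}
uuid2nguid 8828ea30-44da-46f2-a582-bfeee020c500
# -> 8828EA3044DA46F2A582BFEEE020C500
```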
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 257126 00:26:12.918 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 257126 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.176 rmmod nvme_tcp 00:26:13.176 rmmod nvme_fabrics 00:26:13.176 rmmod nvme_keyring 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 257075 ']' 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 257075 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 257075 ']' 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 257075 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:13.176 12:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257075 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257075' 00:26:13.176 killing process with pid 257075 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 257075 00:26:13.176 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 257075 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.435 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.435 12:32:44 
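The teardown above runs `killprocess` twice (for pids 257126 and 257075): it confirms the pid is alive with `kill -0`, looks up the command name with `ps` and refuses to signal a bare `sudo`, then kills and reaps the process. A sketch of that pattern; details follow the trace, though `ps -p` is used here in place of the positional-pid form:

```shell
# Kill a pid the way the killprocess helper in the trace does: verify it is
# alive, never signal a process whose command name is `sudo`, then kill it
# and wait so the exit status is reaped.
killprocess() {
  local pid=$1 pname
  kill -0 "$pid" 2>/dev/null || return 1       # not running
  pname=$(ps --no-headers -o comm= -p "$pid")
  [ "$pname" = sudo ] && return 1              # refuse to kill sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true              # reap; ignore non-child errors
}
```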
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.965 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.965 00:26:15.965 real 0m12.330s 00:26:15.965 user 0m10.397s 00:26:15.965 sys 0m5.246s 00:26:15.966 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:15.966 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:15.966 ************************************ 00:26:15.966 END TEST nvmf_nsid 00:26:15.966 ************************************ 00:26:15.966 12:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:15.966 00:26:15.966 real 12m52.807s 00:26:15.966 user 28m32.053s 00:26:15.966 sys 3m37.828s 00:26:15.966 12:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:15.966 12:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.966 ************************************ 00:26:15.966 END TEST nvmf_target_extra 00:26:15.966 ************************************ 00:26:15.966 12:32:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:15.966 12:32:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:15.966 12:32:47 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:15.966 12:32:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:15.966 ************************************ 00:26:15.966 START TEST nvmf_host 00:26:15.966 ************************************ 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:15.966 * Looking for test storage... 
00:26:15.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:15.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.966 --rc genhtml_branch_coverage=1 00:26:15.966 --rc genhtml_function_coverage=1 00:26:15.966 --rc genhtml_legend=1 00:26:15.966 --rc geninfo_all_blocks=1 00:26:15.966 --rc geninfo_unexecuted_blocks=1 00:26:15.966 00:26:15.966 ' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:15.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.966 --rc genhtml_branch_coverage=1 00:26:15.966 --rc genhtml_function_coverage=1 00:26:15.966 --rc genhtml_legend=1 00:26:15.966 --rc 
geninfo_all_blocks=1 00:26:15.966 --rc geninfo_unexecuted_blocks=1 00:26:15.966 00:26:15.966 ' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:15.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.966 --rc genhtml_branch_coverage=1 00:26:15.966 --rc genhtml_function_coverage=1 00:26:15.966 --rc genhtml_legend=1 00:26:15.966 --rc geninfo_all_blocks=1 00:26:15.966 --rc geninfo_unexecuted_blocks=1 00:26:15.966 00:26:15.966 ' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:15.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.966 --rc genhtml_branch_coverage=1 00:26:15.966 --rc genhtml_function_coverage=1 00:26:15.966 --rc genhtml_legend=1 00:26:15.966 --rc geninfo_all_blocks=1 00:26:15.966 --rc geninfo_unexecuted_blocks=1 00:26:15.966 00:26:15.966 ' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:15.966 12:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.967 ************************************ 00:26:15.967 START TEST nvmf_multicontroller 00:26:15.967 ************************************ 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:15.967 * Looking for test storage... 
00:26:15.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.967 --rc genhtml_branch_coverage=1 00:26:15.967 --rc genhtml_function_coverage=1 
00:26:15.967 --rc genhtml_legend=1 00:26:15.967 --rc geninfo_all_blocks=1 00:26:15.967 --rc geninfo_unexecuted_blocks=1 00:26:15.967 00:26:15.967 ' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.967 --rc genhtml_branch_coverage=1 00:26:15.967 --rc genhtml_function_coverage=1 00:26:15.967 --rc genhtml_legend=1 00:26:15.967 --rc geninfo_all_blocks=1 00:26:15.967 --rc geninfo_unexecuted_blocks=1 00:26:15.967 00:26:15.967 ' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.967 --rc genhtml_branch_coverage=1 00:26:15.967 --rc genhtml_function_coverage=1 00:26:15.967 --rc genhtml_legend=1 00:26:15.967 --rc geninfo_all_blocks=1 00:26:15.967 --rc geninfo_unexecuted_blocks=1 00:26:15.967 00:26:15.967 ' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.967 --rc genhtml_branch_coverage=1 00:26:15.967 --rc genhtml_function_coverage=1 00:26:15.967 --rc genhtml_legend=1 00:26:15.967 --rc geninfo_all_blocks=1 00:26:15.967 --rc geninfo_unexecuted_blocks=1 00:26:15.967 00:26:15.967 ' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.967 12:32:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.967 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.968 12:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.236 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:21.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:21.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.237 12:32:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:21.237 Found net devices under 0000:af:00.0: cvl_0_0 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:21.237 Found net devices under 0000:af:00.1: cvl_0_1 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.237 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.496 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.496 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.496 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:21.496 12:32:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:21.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:26:21.496 00:26:21.496 --- 10.0.0.2 ping statistics --- 00:26:21.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.496 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:21.496 00:26:21.496 --- 10.0.0.1 ping statistics --- 00:26:21.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.496 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:21.496 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=261490 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 261490 00:26:21.755 12:32:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 261490 ']' 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.755 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:21.755 [2024-11-06 12:32:53.182318] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:26:21.755 [2024-11-06 12:32:53.182377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.755 [2024-11-06 12:32:53.254140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:21.755 [2024-11-06 12:32:53.294789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.755 [2024-11-06 12:32:53.294819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:21.755 [2024-11-06 12:32:53.294826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.755 [2024-11-06 12:32:53.294832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.755 [2024-11-06 12:32:53.294836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.755 [2024-11-06 12:32:53.296165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.755 [2024-11-06 12:32:53.296260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.755 [2024-11-06 12:32:53.296261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 [2024-11-06 12:32:53.450754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 Malloc0 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 [2024-11-06 
12:32:53.510723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.014 [2024-11-06 12:32:53.518642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.014 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 Malloc1 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=261579 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 261579 /var/tmp/bdevperf.sock 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 261579 ']' 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:22.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.015 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.273 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.273 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:26:22.273 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:22.273 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.274 12:32:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.532 NVMe0n1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.532 1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:22.532 12:32:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.532 request: 00:26:22.532 { 00:26:22.532 "name": "NVMe0", 00:26:22.532 "trtype": "tcp", 00:26:22.532 "traddr": "10.0.0.2", 00:26:22.532 "adrfam": "ipv4", 00:26:22.532 "trsvcid": "4420", 00:26:22.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.532 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:22.532 "hostaddr": "10.0.0.1", 00:26:22.532 "prchk_reftag": false, 00:26:22.532 "prchk_guard": false, 00:26:22.532 "hdgst": false, 00:26:22.532 "ddgst": false, 00:26:22.532 "allow_unrecognized_csi": false, 00:26:22.532 "method": "bdev_nvme_attach_controller", 00:26:22.532 "req_id": 1 00:26:22.532 } 00:26:22.532 Got JSON-RPC error response 00:26:22.532 response: 00:26:22.532 { 00:26:22.532 "code": -114, 00:26:22.532 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:22.532 } 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:22.532 12:32:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.532 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.532 request: 00:26:22.532 { 00:26:22.532 "name": "NVMe0", 00:26:22.532 "trtype": "tcp", 00:26:22.533 "traddr": "10.0.0.2", 00:26:22.533 "adrfam": "ipv4", 00:26:22.533 "trsvcid": "4420", 00:26:22.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:22.533 "hostaddr": "10.0.0.1", 00:26:22.533 "prchk_reftag": false, 00:26:22.533 "prchk_guard": false, 00:26:22.533 "hdgst": false, 00:26:22.533 "ddgst": false, 00:26:22.533 "allow_unrecognized_csi": false, 00:26:22.533 "method": "bdev_nvme_attach_controller", 00:26:22.533 "req_id": 1 00:26:22.533 } 00:26:22.533 Got JSON-RPC error response 00:26:22.533 response: 00:26:22.533 { 00:26:22.533 "code": -114, 00:26:22.533 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:22.533 } 00:26:22.533 12:32:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.533 request: 00:26:22.533 { 00:26:22.533 "name": "NVMe0", 00:26:22.533 "trtype": "tcp", 00:26:22.533 "traddr": "10.0.0.2", 00:26:22.533 "adrfam": "ipv4", 00:26:22.533 "trsvcid": "4420", 00:26:22.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.533 "hostaddr": "10.0.0.1", 00:26:22.533 "prchk_reftag": false, 00:26:22.533 "prchk_guard": false, 00:26:22.533 "hdgst": false, 00:26:22.533 "ddgst": false, 00:26:22.533 "multipath": "disable", 00:26:22.533 "allow_unrecognized_csi": false, 00:26:22.533 "method": "bdev_nvme_attach_controller", 00:26:22.533 "req_id": 1 00:26:22.533 } 00:26:22.533 Got JSON-RPC error response 00:26:22.533 response: 00:26:22.533 { 00:26:22.533 "code": -114, 00:26:22.533 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:22.533 } 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.533 request: 00:26:22.533 { 00:26:22.533 "name": "NVMe0", 00:26:22.533 "trtype": "tcp", 00:26:22.533 "traddr": "10.0.0.2", 00:26:22.533 "adrfam": "ipv4", 00:26:22.533 "trsvcid": "4420", 00:26:22.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.533 "hostaddr": "10.0.0.1", 00:26:22.533 "prchk_reftag": false, 00:26:22.533 "prchk_guard": false, 00:26:22.533 "hdgst": false, 00:26:22.533 "ddgst": false, 00:26:22.533 "multipath": "failover", 00:26:22.533 "allow_unrecognized_csi": false, 00:26:22.533 "method": "bdev_nvme_attach_controller", 00:26:22.533 "req_id": 1 00:26:22.533 } 00:26:22.533 Got JSON-RPC error response 00:26:22.533 response: 00:26:22.533 { 00:26:22.533 "code": -114, 00:26:22.533 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:22.533 } 00:26:22.533 12:32:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.533 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.791 NVMe0n1 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.791 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:23.050 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:23.050 12:32:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:23.985 { 00:26:23.985 "results": [ 00:26:23.985 { 00:26:23.985 "job": "NVMe0n1", 00:26:23.985 "core_mask": "0x1", 00:26:23.985 "workload": "write", 00:26:23.985 "status": "finished", 00:26:23.985 "queue_depth": 128, 00:26:23.985 "io_size": 4096, 00:26:23.985 "runtime": 1.008063, 00:26:23.985 "iops": 26707.656168314876, 00:26:23.985 "mibps": 104.32678190747998, 00:26:23.985 "io_failed": 0, 00:26:23.985 "io_timeout": 0, 00:26:23.985 "avg_latency_us": 4781.440641830405, 00:26:23.985 "min_latency_us": 2398.021818181818, 00:26:23.985 "max_latency_us": 9830.4 00:26:23.985 } 00:26:23.985 ], 00:26:23.985 "core_count": 1 00:26:23.985 } 00:26:23.985 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:26:23.985 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.985 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 261579 ']' 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 261579' 00:26:24.244 killing process with pid 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 261579 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.244 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.502 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:26:24.503 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:24.503 [2024-11-06 12:32:53.623184] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:26:24.503 [2024-11-06 12:32:53.623248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261579 ] 00:26:24.503 [2024-11-06 12:32:53.716212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.503 [2024-11-06 12:32:53.766909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.503 [2024-11-06 12:32:54.414401] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 82923d1e-a66c-4ac0-8653-df2d7cd7d95d already exists 00:26:24.503 [2024-11-06 12:32:54.414433] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:82923d1e-a66c-4ac0-8653-df2d7cd7d95d alias for bdev NVMe1n1 00:26:24.503 [2024-11-06 12:32:54.414445] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:24.503 Running I/O for 1 seconds... 00:26:24.503 26697.00 IOPS, 104.29 MiB/s 00:26:24.503 Latency(us) 00:26:24.503 [2024-11-06T11:32:56.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.503 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:24.503 NVMe0n1 : 1.01 26707.66 104.33 0.00 0.00 4781.44 2398.02 9830.40 00:26:24.503 [2024-11-06T11:32:56.118Z] =================================================================================================================== 00:26:24.503 [2024-11-06T11:32:56.118Z] Total : 26707.66 104.33 0.00 0.00 4781.44 2398.02 9830.40 00:26:24.503 Received shutdown signal, test time was about 1.000000 seconds 00:26:24.503 00:26:24.503 Latency(us) 00:26:24.503 [2024-11-06T11:32:56.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.503 [2024-11-06T11:32:56.118Z] =================================================================================================================== 00:26:24.503 [2024-11-06T11:32:56.118Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:26:24.503 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.503 rmmod nvme_tcp 00:26:24.503 rmmod nvme_fabrics 00:26:24.503 rmmod nvme_keyring 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 261490 ']' 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 261490 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 261490 ']' 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 261490 
00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.503 12:32:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 261490 00:26:24.503 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:24.503 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:24.503 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 261490' 00:26:24.503 killing process with pid 261490 00:26:24.503 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 261490 00:26:24.503 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 261490 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.761 12:32:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.293 00:26:27.293 real 0m10.998s 00:26:27.293 user 0m12.903s 00:26:27.293 sys 0m5.056s 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 ************************************ 00:26:27.293 END TEST nvmf_multicontroller 00:26:27.293 ************************************ 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.293 ************************************ 00:26:27.293 START TEST nvmf_aer 00:26:27.293 ************************************ 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:27.293 * Looking for test storage... 
00:26:27.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:27.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.293 --rc genhtml_branch_coverage=1 00:26:27.293 --rc genhtml_function_coverage=1 00:26:27.293 --rc genhtml_legend=1 00:26:27.293 --rc geninfo_all_blocks=1 00:26:27.293 --rc geninfo_unexecuted_blocks=1 00:26:27.293 00:26:27.293 ' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:27.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.293 --rc 
genhtml_branch_coverage=1 00:26:27.293 --rc genhtml_function_coverage=1 00:26:27.293 --rc genhtml_legend=1 00:26:27.293 --rc geninfo_all_blocks=1 00:26:27.293 --rc geninfo_unexecuted_blocks=1 00:26:27.293 00:26:27.293 ' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:27.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.293 --rc genhtml_branch_coverage=1 00:26:27.293 --rc genhtml_function_coverage=1 00:26:27.293 --rc genhtml_legend=1 00:26:27.293 --rc geninfo_all_blocks=1 00:26:27.293 --rc geninfo_unexecuted_blocks=1 00:26:27.293 00:26:27.293 ' 00:26:27.293 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:27.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.293 --rc genhtml_branch_coverage=1 00:26:27.293 --rc genhtml_function_coverage=1 00:26:27.293 --rc genhtml_legend=1 00:26:27.294 --rc geninfo_all_blocks=1 00:26:27.294 --rc geninfo_unexecuted_blocks=1 00:26:27.294 00:26:27.294 ' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.294 12:32:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.294 12:32:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:32.565 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:32.565 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.565 12:33:03 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:32.565 Found net devices under 0000:af:00.0: cvl_0_0 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:32.565 Found net devices under 0000:af:00.1: cvl_0_1 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.565 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.566 12:33:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:32.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:26:32.566 00:26:32.566 --- 10.0.0.2 ping statistics --- 00:26:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.566 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:26:32.566 00:26:32.566 --- 10.0.0.1 ping statistics --- 00:26:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.566 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=265773 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 265773 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 265773 ']' 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.566 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:32.823 [2024-11-06 12:33:04.214533] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:26:32.823 [2024-11-06 12:33:04.214593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.823 [2024-11-06 12:33:04.314956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.823 [2024-11-06 12:33:04.365929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:32.823 [2024-11-06 12:33:04.365972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.823 [2024-11-06 12:33:04.365986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.823 [2024-11-06 12:33:04.365996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.823 [2024-11-06 12:33:04.366003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.823 [2024-11-06 12:33:04.367990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.823 [2024-11-06 12:33:04.368095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.823 [2024-11-06 12:33:04.368224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.823 [2024-11-06 12:33:04.368225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 [2024-11-06 12:33:04.507040] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 Malloc0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 [2024-11-06 12:33:04.572442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.081 [ 00:26:33.081 { 00:26:33.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:33.081 "subtype": "Discovery", 00:26:33.081 "listen_addresses": [], 00:26:33.081 "allow_any_host": true, 00:26:33.081 "hosts": [] 00:26:33.081 }, 00:26:33.081 { 00:26:33.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.081 "subtype": "NVMe", 00:26:33.081 "listen_addresses": [ 00:26:33.081 { 00:26:33.081 "trtype": "TCP", 00:26:33.081 "adrfam": "IPv4", 00:26:33.081 "traddr": "10.0.0.2", 00:26:33.081 "trsvcid": "4420" 00:26:33.081 } 00:26:33.081 ], 00:26:33.081 "allow_any_host": true, 00:26:33.081 "hosts": [], 00:26:33.081 "serial_number": "SPDK00000000000001", 00:26:33.081 "model_number": "SPDK bdev Controller", 00:26:33.081 "max_namespaces": 2, 00:26:33.081 "min_cntlid": 1, 00:26:33.081 "max_cntlid": 65519, 00:26:33.081 "namespaces": [ 00:26:33.081 { 00:26:33.081 "nsid": 1, 00:26:33.081 "bdev_name": "Malloc0", 00:26:33.081 "name": "Malloc0", 00:26:33.081 "nguid": "AFBC67B48EA148228DF0656F32F341E6", 00:26:33.081 "uuid": "afbc67b4-8ea1-4822-8df0-656f32f341e6" 00:26:33.081 } 00:26:33.081 ] 00:26:33.081 } 00:26:33.081 ] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=265914 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:26:33.081 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.339 Malloc1 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:33.339 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.340 [ 00:26:33.340 { 00:26:33.340 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:33.340 "subtype": "Discovery", 00:26:33.340 "listen_addresses": [], 00:26:33.340 "allow_any_host": true, 00:26:33.340 "hosts": [] 00:26:33.340 }, 00:26:33.340 { 00:26:33.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.340 "subtype": "NVMe", 00:26:33.340 "listen_addresses": [ 00:26:33.340 { 00:26:33.340 "trtype": "TCP", 00:26:33.340 "adrfam": "IPv4", 00:26:33.340 "traddr": "10.0.0.2", 00:26:33.340 "trsvcid": "4420" 00:26:33.340 } 00:26:33.340 ], 00:26:33.340 "allow_any_host": true, 00:26:33.340 "hosts": [], 00:26:33.340 "serial_number": "SPDK00000000000001", 00:26:33.340 "model_number": 
"SPDK bdev Controller", 00:26:33.340 "max_namespaces": 2, 00:26:33.340 "min_cntlid": 1, 00:26:33.340 "max_cntlid": 65519, 00:26:33.340 "namespaces": [ 00:26:33.340 { 00:26:33.340 "nsid": 1, 00:26:33.340 Asynchronous Event Request test 00:26:33.340 Attaching to 10.0.0.2 00:26:33.340 Attached to 10.0.0.2 00:26:33.340 Registering asynchronous event callbacks... 00:26:33.340 Starting namespace attribute notice tests for all controllers... 00:26:33.340 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:33.340 aer_cb - Changed Namespace 00:26:33.340 Cleaning up... 00:26:33.340 "bdev_name": "Malloc0", 00:26:33.340 "name": "Malloc0", 00:26:33.340 "nguid": "AFBC67B48EA148228DF0656F32F341E6", 00:26:33.340 "uuid": "afbc67b4-8ea1-4822-8df0-656f32f341e6" 00:26:33.340 }, 00:26:33.340 { 00:26:33.340 "nsid": 2, 00:26:33.340 "bdev_name": "Malloc1", 00:26:33.340 "name": "Malloc1", 00:26:33.340 "nguid": "619F58F3E6714B6EA7F1A7C23CF57790", 00:26:33.340 "uuid": "619f58f3-e671-4b6e-a7f1-a7c23cf57790" 00:26:33.340 } 00:26:33.340 ] 00:26:33.340 } 00:26:33.340 ] 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 265914 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.340 
12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.340 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.340 rmmod nvme_tcp 00:26:33.598 rmmod nvme_fabrics 00:26:33.598 rmmod nvme_keyring 00:26:33.598 12:33:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 265773 ']' 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 265773 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 265773 ']' 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # 
kill -0 265773 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 265773 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 265773' 00:26:33.598 killing process with pid 265773 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 265773 00:26:33.598 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 265773 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.856 12:33:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.757 00:26:35.757 real 0m8.948s 00:26:35.757 user 0m5.186s 00:26:35.757 sys 0m4.628s 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:35.757 ************************************ 00:26:35.757 END TEST nvmf_aer 00:26:35.757 ************************************ 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:35.757 12:33:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.015 ************************************ 00:26:36.015 START TEST nvmf_async_init 00:26:36.015 ************************************ 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:36.015 * Looking for test storage... 
00:26:36.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:36.015 12:33:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.015 --rc genhtml_branch_coverage=1 00:26:36.015 --rc genhtml_function_coverage=1 00:26:36.015 --rc genhtml_legend=1 00:26:36.015 --rc geninfo_all_blocks=1 00:26:36.015 --rc geninfo_unexecuted_blocks=1 00:26:36.015 
00:26:36.015 ' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.015 --rc genhtml_branch_coverage=1 00:26:36.015 --rc genhtml_function_coverage=1 00:26:36.015 --rc genhtml_legend=1 00:26:36.015 --rc geninfo_all_blocks=1 00:26:36.015 --rc geninfo_unexecuted_blocks=1 00:26:36.015 00:26:36.015 ' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.015 --rc genhtml_branch_coverage=1 00:26:36.015 --rc genhtml_function_coverage=1 00:26:36.015 --rc genhtml_legend=1 00:26:36.015 --rc geninfo_all_blocks=1 00:26:36.015 --rc geninfo_unexecuted_blocks=1 00:26:36.015 00:26:36.015 ' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:36.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.015 --rc genhtml_branch_coverage=1 00:26:36.015 --rc genhtml_function_coverage=1 00:26:36.015 --rc genhtml_legend=1 00:26:36.015 --rc geninfo_all_blocks=1 00:26:36.015 --rc geninfo_unexecuted_blocks=1 00:26:36.015 00:26:36.015 ' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.015 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:36.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=48b3c4d7e0b645fcba732ff13332dd16 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:36.016 12:33:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.276 12:33:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:41.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:41.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:41.276 Found net devices under 0000:af:00.0: cvl_0_0 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.276 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:41.277 Found net devices under 0000:af:00.1: cvl_0_1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:26:41.277 00:26:41.277 --- 10.0.0.2 ping statistics --- 00:26:41.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.277 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:26:41.277 00:26:41.277 --- 10.0.0.1 ping statistics --- 00:26:41.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.277 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=269773 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 269773 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 269773 ']' 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.277 12:33:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:41.277 [2024-11-06 12:33:12.758185] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:26:41.277 [2024-11-06 12:33:12.758244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.277 [2024-11-06 12:33:12.856674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.535 [2024-11-06 12:33:12.905719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.535 [2024-11-06 12:33:12.905759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.535 [2024-11-06 12:33:12.905769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.535 [2024-11-06 12:33:12.905778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.535 [2024-11-06 12:33:12.905785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:41.535 [2024-11-06 12:33:12.906474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.535 [2024-11-06 12:33:13.053760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.535 null0 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.535 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 48b3c4d7e0b645fcba732ff13332dd16 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.536 [2024-11-06 12:33:13.094030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.536 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.793 nvme0n1 00:26:41.793 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.793 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:41.793 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.793 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.793 [ 00:26:41.794 { 00:26:41.794 "name": "nvme0n1", 00:26:41.794 "aliases": [ 00:26:41.794 "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16" 00:26:41.794 ], 00:26:41.794 "product_name": "NVMe disk", 00:26:41.794 "block_size": 512, 00:26:41.794 "num_blocks": 2097152, 00:26:41.794 "uuid": "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16", 00:26:41.794 "numa_id": 1, 00:26:41.794 "assigned_rate_limits": { 00:26:41.794 "rw_ios_per_sec": 0, 00:26:41.794 "rw_mbytes_per_sec": 0, 00:26:41.794 "r_mbytes_per_sec": 0, 00:26:41.794 "w_mbytes_per_sec": 0 00:26:41.794 }, 00:26:41.794 "claimed": false, 00:26:41.794 "zoned": false, 00:26:41.794 "supported_io_types": { 00:26:41.794 "read": true, 00:26:41.794 "write": true, 00:26:41.794 "unmap": false, 00:26:41.794 "flush": true, 00:26:41.794 "reset": true, 00:26:41.794 "nvme_admin": true, 00:26:41.794 "nvme_io": true, 00:26:41.794 "nvme_io_md": false, 00:26:41.794 "write_zeroes": true, 00:26:41.794 "zcopy": false, 00:26:41.794 "get_zone_info": false, 00:26:41.794 "zone_management": false, 00:26:41.794 "zone_append": false, 00:26:41.794 "compare": true, 00:26:41.794 "compare_and_write": true, 00:26:41.794 "abort": true, 00:26:41.794 "seek_hole": false, 00:26:41.794 "seek_data": false, 00:26:41.794 "copy": true, 00:26:41.794 
"nvme_iov_md": false 00:26:41.794 }, 00:26:41.794 "memory_domains": [ 00:26:41.794 { 00:26:41.794 "dma_device_id": "system", 00:26:41.794 "dma_device_type": 1 00:26:41.794 } 00:26:41.794 ], 00:26:41.794 "driver_specific": { 00:26:41.794 "nvme": [ 00:26:41.794 { 00:26:41.794 "trid": { 00:26:41.794 "trtype": "TCP", 00:26:41.794 "adrfam": "IPv4", 00:26:41.794 "traddr": "10.0.0.2", 00:26:41.794 "trsvcid": "4420", 00:26:41.794 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:41.794 }, 00:26:41.794 "ctrlr_data": { 00:26:41.794 "cntlid": 1, 00:26:41.794 "vendor_id": "0x8086", 00:26:41.794 "model_number": "SPDK bdev Controller", 00:26:41.794 "serial_number": "00000000000000000000", 00:26:41.794 "firmware_revision": "25.01", 00:26:41.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:41.794 "oacs": { 00:26:41.794 "security": 0, 00:26:41.794 "format": 0, 00:26:41.794 "firmware": 0, 00:26:41.794 "ns_manage": 0 00:26:41.794 }, 00:26:41.794 "multi_ctrlr": true, 00:26:41.794 "ana_reporting": false 00:26:41.794 }, 00:26:41.794 "vs": { 00:26:41.794 "nvme_version": "1.3" 00:26:41.794 }, 00:26:41.794 "ns_data": { 00:26:41.794 "id": 1, 00:26:41.794 "can_share": true 00:26:41.794 } 00:26:41.794 } 00:26:41.794 ], 00:26:41.794 "mp_policy": "active_passive" 00:26:41.794 } 00:26:41.794 } 00:26:41.794 ] 00:26:41.794 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.794 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:41.794 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.794 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:41.794 [2024-11-06 12:33:13.339585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:41.794 [2024-11-06 12:33:13.339659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xa09210 (9): Bad file descriptor 00:26:42.052 [2024-11-06 12:33:13.471597] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:42.052 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.052 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:42.052 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.052 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.052 [ 00:26:42.052 { 00:26:42.052 "name": "nvme0n1", 00:26:42.052 "aliases": [ 00:26:42.052 "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16" 00:26:42.052 ], 00:26:42.052 "product_name": "NVMe disk", 00:26:42.052 "block_size": 512, 00:26:42.052 "num_blocks": 2097152, 00:26:42.052 "uuid": "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16", 00:26:42.052 "numa_id": 1, 00:26:42.052 "assigned_rate_limits": { 00:26:42.052 "rw_ios_per_sec": 0, 00:26:42.052 "rw_mbytes_per_sec": 0, 00:26:42.052 "r_mbytes_per_sec": 0, 00:26:42.052 "w_mbytes_per_sec": 0 00:26:42.052 }, 00:26:42.052 "claimed": false, 00:26:42.052 "zoned": false, 00:26:42.052 "supported_io_types": { 00:26:42.052 "read": true, 00:26:42.052 "write": true, 00:26:42.052 "unmap": false, 00:26:42.052 "flush": true, 00:26:42.052 "reset": true, 00:26:42.052 "nvme_admin": true, 00:26:42.052 "nvme_io": true, 00:26:42.052 "nvme_io_md": false, 00:26:42.052 "write_zeroes": true, 00:26:42.052 "zcopy": false, 00:26:42.052 "get_zone_info": false, 00:26:42.052 "zone_management": false, 00:26:42.052 "zone_append": false, 00:26:42.052 "compare": true, 00:26:42.052 "compare_and_write": true, 00:26:42.052 "abort": true, 00:26:42.052 "seek_hole": false, 00:26:42.052 "seek_data": false, 00:26:42.052 "copy": true, 00:26:42.052 "nvme_iov_md": false 00:26:42.052 }, 00:26:42.052 "memory_domains": [ 
00:26:42.052 { 00:26:42.052 "dma_device_id": "system", 00:26:42.052 "dma_device_type": 1 00:26:42.052 } 00:26:42.052 ], 00:26:42.052 "driver_specific": { 00:26:42.052 "nvme": [ 00:26:42.052 { 00:26:42.052 "trid": { 00:26:42.052 "trtype": "TCP", 00:26:42.052 "adrfam": "IPv4", 00:26:42.052 "traddr": "10.0.0.2", 00:26:42.052 "trsvcid": "4420", 00:26:42.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:42.052 }, 00:26:42.052 "ctrlr_data": { 00:26:42.052 "cntlid": 2, 00:26:42.052 "vendor_id": "0x8086", 00:26:42.052 "model_number": "SPDK bdev Controller", 00:26:42.052 "serial_number": "00000000000000000000", 00:26:42.052 "firmware_revision": "25.01", 00:26:42.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.052 "oacs": { 00:26:42.052 "security": 0, 00:26:42.052 "format": 0, 00:26:42.052 "firmware": 0, 00:26:42.052 "ns_manage": 0 00:26:42.052 }, 00:26:42.052 "multi_ctrlr": true, 00:26:42.052 "ana_reporting": false 00:26:42.052 }, 00:26:42.053 "vs": { 00:26:42.053 "nvme_version": "1.3" 00:26:42.053 }, 00:26:42.053 "ns_data": { 00:26:42.053 "id": 1, 00:26:42.053 "can_share": true 00:26:42.053 } 00:26:42.053 } 00:26:42.053 ], 00:26:42.053 "mp_policy": "active_passive" 00:26:42.053 } 00:26:42.053 } 00:26:42.053 ] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.X2e9A9QMBH 
00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.X2e9A9QMBH 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.X2e9A9QMBH 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 [2024-11-06 12:33:13.528202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:42.053 [2024-11-06 12:33:13.528333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 [2024-11-06 12:33:13.544269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:42.053 nvme0n1 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 [ 00:26:42.053 { 00:26:42.053 "name": "nvme0n1", 00:26:42.053 "aliases": [ 00:26:42.053 "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16" 00:26:42.053 ], 00:26:42.053 "product_name": "NVMe disk", 00:26:42.053 "block_size": 512, 00:26:42.053 "num_blocks": 2097152, 00:26:42.053 "uuid": "48b3c4d7-e0b6-45fc-ba73-2ff13332dd16", 00:26:42.053 "numa_id": 1, 00:26:42.053 "assigned_rate_limits": { 00:26:42.053 "rw_ios_per_sec": 0, 00:26:42.053 
"rw_mbytes_per_sec": 0, 00:26:42.053 "r_mbytes_per_sec": 0, 00:26:42.053 "w_mbytes_per_sec": 0 00:26:42.053 }, 00:26:42.053 "claimed": false, 00:26:42.053 "zoned": false, 00:26:42.053 "supported_io_types": { 00:26:42.053 "read": true, 00:26:42.053 "write": true, 00:26:42.053 "unmap": false, 00:26:42.053 "flush": true, 00:26:42.053 "reset": true, 00:26:42.053 "nvme_admin": true, 00:26:42.053 "nvme_io": true, 00:26:42.053 "nvme_io_md": false, 00:26:42.053 "write_zeroes": true, 00:26:42.053 "zcopy": false, 00:26:42.053 "get_zone_info": false, 00:26:42.053 "zone_management": false, 00:26:42.053 "zone_append": false, 00:26:42.053 "compare": true, 00:26:42.053 "compare_and_write": true, 00:26:42.053 "abort": true, 00:26:42.053 "seek_hole": false, 00:26:42.053 "seek_data": false, 00:26:42.053 "copy": true, 00:26:42.053 "nvme_iov_md": false 00:26:42.053 }, 00:26:42.053 "memory_domains": [ 00:26:42.053 { 00:26:42.053 "dma_device_id": "system", 00:26:42.053 "dma_device_type": 1 00:26:42.053 } 00:26:42.053 ], 00:26:42.053 "driver_specific": { 00:26:42.053 "nvme": [ 00:26:42.053 { 00:26:42.053 "trid": { 00:26:42.053 "trtype": "TCP", 00:26:42.053 "adrfam": "IPv4", 00:26:42.053 "traddr": "10.0.0.2", 00:26:42.053 "trsvcid": "4421", 00:26:42.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:42.053 }, 00:26:42.053 "ctrlr_data": { 00:26:42.053 "cntlid": 3, 00:26:42.053 "vendor_id": "0x8086", 00:26:42.053 "model_number": "SPDK bdev Controller", 00:26:42.053 "serial_number": "00000000000000000000", 00:26:42.053 "firmware_revision": "25.01", 00:26:42.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.053 "oacs": { 00:26:42.053 "security": 0, 00:26:42.053 "format": 0, 00:26:42.053 "firmware": 0, 00:26:42.053 "ns_manage": 0 00:26:42.053 }, 00:26:42.053 "multi_ctrlr": true, 00:26:42.053 "ana_reporting": false 00:26:42.053 }, 00:26:42.053 "vs": { 00:26:42.053 "nvme_version": "1.3" 00:26:42.053 }, 00:26:42.053 "ns_data": { 00:26:42.053 "id": 1, 00:26:42.053 "can_share": true 00:26:42.053 } 
00:26:42.053 } 00:26:42.053 ], 00:26:42.053 "mp_policy": "active_passive" 00:26:42.053 } 00:26:42.053 } 00:26:42.053 ] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.X2e9A9QMBH 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.053 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.053 rmmod nvme_tcp 00:26:42.053 rmmod nvme_fabrics 00:26:42.053 rmmod nvme_keyring 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:42.311 12:33:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 269773 ']' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 269773 ']' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 269773' 00:26:42.311 killing process with pid 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 269773 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.311 12:33:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.311 12:33:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.842 12:33:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.842 00:26:44.842 real 0m8.583s 00:26:44.842 user 0m2.626s 00:26:44.842 sys 0m4.149s 00:26:44.842 12:33:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:44.842 12:33:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 ************************************ 00:26:44.842 END TEST nvmf_async_init 00:26:44.842 ************************************ 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 ************************************ 00:26:44.842 START TEST dma 00:26:44.842 ************************************ 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:44.842 * 
Looking for test storage... 00:26:44.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.842 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.843 --rc genhtml_branch_coverage=1 00:26:44.843 --rc genhtml_function_coverage=1 00:26:44.843 --rc genhtml_legend=1 00:26:44.843 --rc geninfo_all_blocks=1 00:26:44.843 --rc geninfo_unexecuted_blocks=1 00:26:44.843 00:26:44.843 ' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.843 --rc genhtml_branch_coverage=1 00:26:44.843 --rc genhtml_function_coverage=1 
00:26:44.843 --rc genhtml_legend=1 00:26:44.843 --rc geninfo_all_blocks=1 00:26:44.843 --rc geninfo_unexecuted_blocks=1 00:26:44.843 00:26:44.843 ' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.843 --rc genhtml_branch_coverage=1 00:26:44.843 --rc genhtml_function_coverage=1 00:26:44.843 --rc genhtml_legend=1 00:26:44.843 --rc geninfo_all_blocks=1 00:26:44.843 --rc geninfo_unexecuted_blocks=1 00:26:44.843 00:26:44.843 ' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:44.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.843 --rc genhtml_branch_coverage=1 00:26:44.843 --rc genhtml_function_coverage=1 00:26:44.843 --rc genhtml_legend=1 00:26:44.843 --rc geninfo_all_blocks=1 00:26:44.843 --rc geninfo_unexecuted_blocks=1 00:26:44.843 00:26:44.843 ' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:44.843 
12:33:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:44.843 00:26:44.843 real 0m0.183s 00:26:44.843 user 0m0.110s 00:26:44.843 sys 0m0.087s 00:26:44.843 12:33:16 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:44.843 ************************************ 00:26:44.843 END TEST dma 00:26:44.843 ************************************ 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.843 ************************************ 00:26:44.843 START TEST nvmf_identify 00:26:44.843 ************************************ 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:44.843 * Looking for test storage... 
00:26:44.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:44.843 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.103 --rc genhtml_branch_coverage=1 00:26:45.103 --rc genhtml_function_coverage=1 00:26:45.103 --rc genhtml_legend=1 00:26:45.103 --rc geninfo_all_blocks=1 00:26:45.103 --rc geninfo_unexecuted_blocks=1 00:26:45.103 00:26:45.103 ' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:26:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.103 --rc genhtml_branch_coverage=1 00:26:45.103 --rc genhtml_function_coverage=1 00:26:45.103 --rc genhtml_legend=1 00:26:45.103 --rc geninfo_all_blocks=1 00:26:45.103 --rc geninfo_unexecuted_blocks=1 00:26:45.103 00:26:45.103 ' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.103 --rc genhtml_branch_coverage=1 00:26:45.103 --rc genhtml_function_coverage=1 00:26:45.103 --rc genhtml_legend=1 00:26:45.103 --rc geninfo_all_blocks=1 00:26:45.103 --rc geninfo_unexecuted_blocks=1 00:26:45.103 00:26:45.103 ' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.103 --rc genhtml_branch_coverage=1 00:26:45.103 --rc genhtml_function_coverage=1 00:26:45.103 --rc genhtml_legend=1 00:26:45.103 --rc geninfo_all_blocks=1 00:26:45.103 --rc geninfo_unexecuted_blocks=1 00:26:45.103 00:26:45.103 ' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.103 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.104 12:33:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.369 12:33:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.369 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:50.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.370 
12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:50.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:50.370 Found net devices under 0000:af:00.0: cvl_0_0 00:26:50.370 12:33:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:50.370 Found net devices under 0000:af:00.1: cvl_0_1 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:26:50.370 00:26:50.370 --- 10.0.0.2 ping statistics --- 00:26:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.370 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:50.370 00:26:50.370 --- 10.0.0.1 ping statistics --- 00:26:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.370 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=273747 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 273747 00:26:50.370 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 273747 ']' 00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.371 12:33:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.629 [2024-11-06 12:33:21.987440] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:26:50.629 [2024-11-06 12:33:21.987508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.629 [2024-11-06 12:33:22.087718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.629 [2024-11-06 12:33:22.139426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.629 [2024-11-06 12:33:22.139470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.629 [2024-11-06 12:33:22.139482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.629 [2024-11-06 12:33:22.139491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.629 [2024-11-06 12:33:22.139499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.629 [2024-11-06 12:33:22.141425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.629 [2024-11-06 12:33:22.141554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.629 [2024-11-06 12:33:22.141575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.629 [2024-11-06 12:33:22.141579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 [2024-11-06 12:33:22.253760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 Malloc0 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 [2024-11-06 12:33:22.352568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 12:33:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.889 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:50.889 [ 00:26:50.889 { 00:26:50.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:50.889 "subtype": "Discovery", 00:26:50.889 "listen_addresses": [ 00:26:50.889 { 00:26:50.889 "trtype": "TCP", 00:26:50.889 "adrfam": "IPv4", 00:26:50.889 "traddr": "10.0.0.2", 00:26:50.889 "trsvcid": "4420" 00:26:50.889 } 00:26:50.889 ], 00:26:50.889 "allow_any_host": true, 00:26:50.889 "hosts": [] 00:26:50.889 }, 00:26:50.889 { 00:26:50.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.890 "subtype": "NVMe", 00:26:50.890 "listen_addresses": [ 00:26:50.890 { 00:26:50.890 "trtype": "TCP", 00:26:50.890 "adrfam": "IPv4", 00:26:50.890 "traddr": "10.0.0.2", 00:26:50.890 "trsvcid": "4420" 00:26:50.890 } 00:26:50.890 ], 00:26:50.890 "allow_any_host": true, 00:26:50.890 "hosts": [], 00:26:50.890 "serial_number": "SPDK00000000000001", 00:26:50.890 "model_number": "SPDK bdev Controller", 00:26:50.890 "max_namespaces": 32, 00:26:50.890 "min_cntlid": 1, 00:26:50.890 "max_cntlid": 65519, 00:26:50.890 "namespaces": [ 00:26:50.890 { 00:26:50.890 "nsid": 1, 00:26:50.890 "bdev_name": "Malloc0", 00:26:50.890 "name": "Malloc0", 00:26:50.890 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:50.890 "eui64": "ABCDEF0123456789", 00:26:50.890 "uuid": "209b37ab-5aad-460f-aa2f-6c4dc3001d5d" 00:26:50.890 } 00:26:50.890 ] 00:26:50.890 } 00:26:50.890 ] 00:26:50.890 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.890 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:50.890 [2024-11-06 12:33:22.403350] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:26:50.890 [2024-11-06 12:33:22.403386] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273847 ] 00:26:50.890 [2024-11-06 12:33:22.461113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:50.890 [2024-11-06 12:33:22.461176] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:50.890 [2024-11-06 12:33:22.461184] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:50.890 [2024-11-06 12:33:22.461197] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:50.890 [2024-11-06 12:33:22.461210] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:50.890 [2024-11-06 12:33:22.464851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:50.890 [2024-11-06 12:33:22.464892] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x168a550 0 00:26:50.890 [2024-11-06 12:33:22.471475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:50.890 [2024-11-06 12:33:22.471494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:50.890 [2024-11-06 12:33:22.471500] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:50.890 [2024-11-06 12:33:22.471505] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:50.890 [2024-11-06 12:33:22.471545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.471552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.471557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.890 [2024-11-06 12:33:22.471573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:50.890 [2024-11-06 12:33:22.471597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.890 [2024-11-06 12:33:22.478472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.890 [2024-11-06 12:33:22.478486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.890 [2024-11-06 12:33:22.478492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.890 [2024-11-06 12:33:22.478513] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:50.890 [2024-11-06 12:33:22.478522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:50.890 [2024-11-06 12:33:22.478529] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:50.890 [2024-11-06 12:33:22.478548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 
00:26:50.890 [2024-11-06 12:33:22.478569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.890 [2024-11-06 12:33:22.478588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.890 [2024-11-06 12:33:22.478763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.890 [2024-11-06 12:33:22.478772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.890 [2024-11-06 12:33:22.478777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.890 [2024-11-06 12:33:22.478789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:50.890 [2024-11-06 12:33:22.478799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:50.890 [2024-11-06 12:33:22.478809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.890 [2024-11-06 12:33:22.478829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.890 [2024-11-06 12:33:22.478844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.890 [2024-11-06 12:33:22.478923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.890 [2024-11-06 12:33:22.478935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:50.890 [2024-11-06 12:33:22.478940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.890 [2024-11-06 12:33:22.478952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:50.890 [2024-11-06 12:33:22.478962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:50.890 [2024-11-06 12:33:22.478971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.478981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.890 [2024-11-06 12:33:22.478989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.890 [2024-11-06 12:33:22.479003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.890 [2024-11-06 12:33:22.479067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.890 [2024-11-06 12:33:22.479076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.890 [2024-11-06 12:33:22.479080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.479085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.890 [2024-11-06 12:33:22.479092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:50.890 [2024-11-06 12:33:22.479104] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.479109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.479114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.890 [2024-11-06 12:33:22.479123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.890 [2024-11-06 12:33:22.479136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.890 [2024-11-06 12:33:22.479199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.890 [2024-11-06 12:33:22.479208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.890 [2024-11-06 12:33:22.479213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.890 [2024-11-06 12:33:22.479217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.890 [2024-11-06 12:33:22.479223] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:50.890 [2024-11-06 12:33:22.479230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:50.891 [2024-11-06 12:33:22.479240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:50.891 [2024-11-06 12:33:22.479351] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:50.891 [2024-11-06 12:33:22.479357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:26:50.891 [2024-11-06 12:33:22.479367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.479386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.891 [2024-11-06 12:33:22.479406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.891 [2024-11-06 12:33:22.479478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.891 [2024-11-06 12:33:22.479487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.891 [2024-11-06 12:33:22.479491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.891 [2024-11-06 12:33:22.479503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:50.891 [2024-11-06 12:33:22.479514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.479533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.891 [2024-11-06 12:33:22.479547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.891 [2024-11-06 
12:33:22.479638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.891 [2024-11-06 12:33:22.479646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.891 [2024-11-06 12:33:22.479650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.891 [2024-11-06 12:33:22.479661] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:50.891 [2024-11-06 12:33:22.479668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.479678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:50.891 [2024-11-06 12:33:22.479688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.479700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.479714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.891 [2024-11-06 12:33:22.479728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.891 [2024-11-06 12:33:22.479823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:50.891 [2024-11-06 12:33:22.479832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:26:50.891 [2024-11-06 12:33:22.479837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479842] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x168a550): datao=0, datal=4096, cccid=0 00:26:50.891 [2024-11-06 12:33:22.479848] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ec100) on tqpair(0x168a550): expected_datao=0, payload_size=4096 00:26:50.891 [2024-11-06 12:33:22.479854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479870] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.891 [2024-11-06 12:33:22.479925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.891 [2024-11-06 12:33:22.479933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.479938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.891 [2024-11-06 12:33:22.479947] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:50.891 [2024-11-06 12:33:22.479954] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:50.891 [2024-11-06 12:33:22.479960] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:50.891 [2024-11-06 12:33:22.479969] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:50.891 [2024-11-06 12:33:22.479976] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:50.891 [2024-11-06 12:33:22.479982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.479996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.480004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:50.891 [2024-11-06 12:33:22.480038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.891 [2024-11-06 12:33:22.480109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.891 [2024-11-06 12:33:22.480118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.891 [2024-11-06 12:33:22.480122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:50.891 [2024-11-06 12:33:22.480137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480154] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.891 [2024-11-06 12:33:22.480162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.891 [2024-11-06 12:33:22.480187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.891 [2024-11-06 12:33:22.480211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.891 [2024-11-06 12:33:22.480237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.480248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:50.891 [2024-11-06 12:33:22.480257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.891 [2024-11-06 12:33:22.480287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec100, cid 0, qid 0 00:26:50.891 [2024-11-06 12:33:22.480293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec280, cid 1, qid 0 00:26:50.891 [2024-11-06 12:33:22.480300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec400, cid 2, qid 0 00:26:50.891 [2024-11-06 12:33:22.480306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:50.891 [2024-11-06 12:33:22.480312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec700, cid 4, qid 0 00:26:50.891 [2024-11-06 12:33:22.480401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.891 [2024-11-06 12:33:22.480409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.891 [2024-11-06 12:33:22.480414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec700) on tqpair=0x168a550 00:26:50.891 [2024-11-06 12:33:22.480428] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:50.891 [2024-11-06 12:33:22.480435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:26:50.891 [2024-11-06 12:33:22.480449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.891 [2024-11-06 12:33:22.480454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x168a550) 00:26:50.891 [2024-11-06 12:33:22.480471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.891 [2024-11-06 12:33:22.480485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec700, cid 4, qid 0 00:26:50.891 [2024-11-06 12:33:22.480572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:50.891 [2024-11-06 12:33:22.480581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:50.891 [2024-11-06 12:33:22.480586] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480590] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x168a550): datao=0, datal=4096, cccid=4 00:26:50.892 [2024-11-06 12:33:22.480596] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ec700) on tqpair(0x168a550): expected_datao=0, payload_size=4096 00:26:50.892 [2024-11-06 12:33:22.480602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480611] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480615] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.892 [2024-11-06 12:33:22.480642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.892 [2024-11-06 12:33:22.480647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x16ec700) on tqpair=0x168a550 00:26:50.892 [2024-11-06 12:33:22.480670] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:50.892 [2024-11-06 12:33:22.480696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x168a550) 00:26:50.892 [2024-11-06 12:33:22.480710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.892 [2024-11-06 12:33:22.480720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x168a550) 00:26:50.892 [2024-11-06 12:33:22.480737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.892 [2024-11-06 12:33:22.480755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec700, cid 4, qid 0 00:26:50.892 [2024-11-06 12:33:22.480762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec880, cid 5, qid 0 00:26:50.892 [2024-11-06 12:33:22.480865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:50.892 [2024-11-06 12:33:22.480873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:50.892 [2024-11-06 12:33:22.480878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480882] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x168a550): datao=0, datal=1024, cccid=4 00:26:50.892 [2024-11-06 12:33:22.480888] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ec700) on tqpair(0x168a550): expected_datao=0, payload_size=1024 00:26:50.892 [2024-11-06 12:33:22.480894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480907] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:50.892 [2024-11-06 12:33:22.480922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:50.892 [2024-11-06 12:33:22.480926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:50.892 [2024-11-06 12:33:22.480931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec880) on tqpair=0x168a550 00:26:51.155 [2024-11-06 12:33:22.523468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.155 [2024-11-06 12:33:22.523484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.155 [2024-11-06 12:33:22.523489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec700) on tqpair=0x168a550 00:26:51.155 [2024-11-06 12:33:22.523511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x168a550) 00:26:51.155 [2024-11-06 12:33:22.523526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.155 [2024-11-06 12:33:22.523549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec700, cid 4, qid 0 00:26:51.155 [2024-11-06 12:33:22.523709] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.155 [2024-11-06 12:33:22.523718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.155 [2024-11-06 12:33:22.523723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x168a550): datao=0, datal=3072, cccid=4 00:26:51.155 [2024-11-06 12:33:22.523734] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ec700) on tqpair(0x168a550): expected_datao=0, payload_size=3072 00:26:51.155 [2024-11-06 12:33:22.523740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523781] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523787] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.155 [2024-11-06 12:33:22.523869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.155 [2024-11-06 12:33:22.523874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec700) on tqpair=0x168a550 00:26:51.155 [2024-11-06 12:33:22.523890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.523895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x168a550) 00:26:51.155 [2024-11-06 12:33:22.523904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.155 [2024-11-06 12:33:22.523924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec700, cid 4, qid 0 00:26:51.155 [2024-11-06 
12:33:22.523998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.155 [2024-11-06 12:33:22.524007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.155 [2024-11-06 12:33:22.524011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.524016] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x168a550): datao=0, datal=8, cccid=4 00:26:51.155 [2024-11-06 12:33:22.524022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ec700) on tqpair(0x168a550): expected_datao=0, payload_size=8 00:26:51.155 [2024-11-06 12:33:22.524028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.524036] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.524041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.564668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.155 [2024-11-06 12:33:22.564682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.155 [2024-11-06 12:33:22.564687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.155 [2024-11-06 12:33:22.564693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec700) on tqpair=0x168a550 00:26:51.155 ===================================================== 00:26:51.155 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:51.155 ===================================================== 00:26:51.155 Controller Capabilities/Features 00:26:51.155 ================================ 00:26:51.155 Vendor ID: 0000 00:26:51.155 Subsystem Vendor ID: 0000 00:26:51.155 Serial Number: .................... 00:26:51.155 Model Number: ........................................ 
00:26:51.155 Firmware Version: 25.01 00:26:51.155 Recommended Arb Burst: 0 00:26:51.155 IEEE OUI Identifier: 00 00 00 00:26:51.155 Multi-path I/O 00:26:51.156 May have multiple subsystem ports: No 00:26:51.156 May have multiple controllers: No 00:26:51.156 Associated with SR-IOV VF: No 00:26:51.156 Max Data Transfer Size: 131072 00:26:51.156 Max Number of Namespaces: 0 00:26:51.156 Max Number of I/O Queues: 1024 00:26:51.156 NVMe Specification Version (VS): 1.3 00:26:51.156 NVMe Specification Version (Identify): 1.3 00:26:51.156 Maximum Queue Entries: 128 00:26:51.156 Contiguous Queues Required: Yes 00:26:51.156 Arbitration Mechanisms Supported 00:26:51.156 Weighted Round Robin: Not Supported 00:26:51.156 Vendor Specific: Not Supported 00:26:51.156 Reset Timeout: 15000 ms 00:26:51.156 Doorbell Stride: 4 bytes 00:26:51.156 NVM Subsystem Reset: Not Supported 00:26:51.156 Command Sets Supported 00:26:51.156 NVM Command Set: Supported 00:26:51.156 Boot Partition: Not Supported 00:26:51.156 Memory Page Size Minimum: 4096 bytes 00:26:51.156 Memory Page Size Maximum: 4096 bytes 00:26:51.156 Persistent Memory Region: Not Supported 00:26:51.156 Optional Asynchronous Events Supported 00:26:51.156 Namespace Attribute Notices: Not Supported 00:26:51.156 Firmware Activation Notices: Not Supported 00:26:51.156 ANA Change Notices: Not Supported 00:26:51.156 PLE Aggregate Log Change Notices: Not Supported 00:26:51.156 LBA Status Info Alert Notices: Not Supported 00:26:51.156 EGE Aggregate Log Change Notices: Not Supported 00:26:51.156 Normal NVM Subsystem Shutdown event: Not Supported 00:26:51.156 Zone Descriptor Change Notices: Not Supported 00:26:51.156 Discovery Log Change Notices: Supported 00:26:51.156 Controller Attributes 00:26:51.156 128-bit Host Identifier: Not Supported 00:26:51.156 Non-Operational Permissive Mode: Not Supported 00:26:51.156 NVM Sets: Not Supported 00:26:51.156 Read Recovery Levels: Not Supported 00:26:51.156 Endurance Groups: Not Supported 00:26:51.156 
Predictable Latency Mode: Not Supported
00:26:51.156 Traffic Based Keep ALive: Not Supported
00:26:51.156 Namespace Granularity: Not Supported
00:26:51.156 SQ Associations: Not Supported
00:26:51.156 UUID List: Not Supported
00:26:51.156 Multi-Domain Subsystem: Not Supported
00:26:51.156 Fixed Capacity Management: Not Supported
00:26:51.156 Variable Capacity Management: Not Supported
00:26:51.156 Delete Endurance Group: Not Supported
00:26:51.156 Delete NVM Set: Not Supported
00:26:51.156 Extended LBA Formats Supported: Not Supported
00:26:51.156 Flexible Data Placement Supported: Not Supported
00:26:51.156 
00:26:51.156 Controller Memory Buffer Support
00:26:51.156 ================================
00:26:51.156 Supported: No
00:26:51.156 
00:26:51.156 Persistent Memory Region Support
00:26:51.156 ================================
00:26:51.156 Supported: No
00:26:51.156 
00:26:51.156 Admin Command Set Attributes
00:26:51.156 ============================
00:26:51.156 Security Send/Receive: Not Supported
00:26:51.156 Format NVM: Not Supported
00:26:51.156 Firmware Activate/Download: Not Supported
00:26:51.156 Namespace Management: Not Supported
00:26:51.156 Device Self-Test: Not Supported
00:26:51.156 Directives: Not Supported
00:26:51.156 NVMe-MI: Not Supported
00:26:51.156 Virtualization Management: Not Supported
00:26:51.156 Doorbell Buffer Config: Not Supported
00:26:51.156 Get LBA Status Capability: Not Supported
00:26:51.156 Command & Feature Lockdown Capability: Not Supported
00:26:51.156 Abort Command Limit: 1
00:26:51.156 Async Event Request Limit: 4
00:26:51.156 Number of Firmware Slots: N/A
00:26:51.156 Firmware Slot 1 Read-Only: N/A
00:26:51.156 Firmware Activation Without Reset: N/A
00:26:51.156 Multiple Update Detection Support: N/A
00:26:51.156 Firmware Update Granularity: No Information Provided
00:26:51.156 Per-Namespace SMART Log: No
00:26:51.156 Asymmetric Namespace Access Log Page: Not Supported
00:26:51.156 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:51.156 Command Effects Log Page: Not Supported
00:26:51.156 Get Log Page Extended Data: Supported
00:26:51.156 Telemetry Log Pages: Not Supported
00:26:51.156 Persistent Event Log Pages: Not Supported
00:26:51.156 Supported Log Pages Log Page: May Support
00:26:51.156 Commands Supported & Effects Log Page: Not Supported
00:26:51.156 Feature Identifiers & Effects Log Page: May Support
00:26:51.156 NVMe-MI Commands & Effects Log Page: May Support
00:26:51.156 Data Area 4 for Telemetry Log: Not Supported
00:26:51.156 Error Log Page Entries Supported: 128
00:26:51.156 Keep Alive: Not Supported
00:26:51.156 
00:26:51.156 NVM Command Set Attributes
00:26:51.156 ==========================
00:26:51.156 Submission Queue Entry Size
00:26:51.156 Max: 1
00:26:51.156 Min: 1
00:26:51.156 Completion Queue Entry Size
00:26:51.156 Max: 1
00:26:51.156 Min: 1
00:26:51.156 Number of Namespaces: 0
00:26:51.156 Compare Command: Not Supported
00:26:51.156 Write Uncorrectable Command: Not Supported
00:26:51.156 Dataset Management Command: Not Supported
00:26:51.156 Write Zeroes Command: Not Supported
00:26:51.156 Set Features Save Field: Not Supported
00:26:51.156 Reservations: Not Supported
00:26:51.156 Timestamp: Not Supported
00:26:51.156 Copy: Not Supported
00:26:51.156 Volatile Write Cache: Not Present
00:26:51.156 Atomic Write Unit (Normal): 1
00:26:51.156 Atomic Write Unit (PFail): 1
00:26:51.156 Atomic Compare & Write Unit: 1
00:26:51.156 Fused Compare & Write: Supported
00:26:51.156 Scatter-Gather List
00:26:51.156 SGL Command Set: Supported
00:26:51.156 SGL Keyed: Supported
00:26:51.156 SGL Bit Bucket Descriptor: Not Supported
00:26:51.156 SGL Metadata Pointer: Not Supported
00:26:51.156 Oversized SGL: Not Supported
00:26:51.156 SGL Metadata Address: Not Supported
00:26:51.156 SGL Offset: Supported
00:26:51.156 Transport SGL Data Block: Not Supported
00:26:51.156 Replay Protected Memory Block: Not Supported
00:26:51.156 
00:26:51.156 Firmware Slot Information
00:26:51.156 =========================
00:26:51.156 Active slot: 0
00:26:51.156 
00:26:51.156 
00:26:51.156 Error Log
00:26:51.156 =========
00:26:51.156 
00:26:51.156 Active Namespaces
00:26:51.156 =================
00:26:51.156 Discovery Log Page
00:26:51.156 ==================
00:26:51.156 Generation Counter: 2
00:26:51.156 Number of Records: 2
00:26:51.156 Record Format: 0
00:26:51.156 
00:26:51.156 Discovery Log Entry 0
00:26:51.156 ----------------------
00:26:51.156 Transport Type: 3 (TCP)
00:26:51.156 Address Family: 1 (IPv4)
00:26:51.156 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:51.156 Entry Flags:
00:26:51.156 Duplicate Returned Information: 1
00:26:51.156 Explicit Persistent Connection Support for Discovery: 1
00:26:51.156 Transport Requirements:
00:26:51.156 Secure Channel: Not Required
00:26:51.156 Port ID: 0 (0x0000)
00:26:51.156 Controller ID: 65535 (0xffff)
00:26:51.156 Admin Max SQ Size: 128
00:26:51.156 Transport Service Identifier: 4420
00:26:51.156 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:51.156 Transport Address: 10.0.0.2
00:26:51.156 Discovery Log Entry 1
00:26:51.156 ----------------------
00:26:51.156 Transport Type: 3 (TCP)
00:26:51.156 Address Family: 1 (IPv4)
00:26:51.156 Subsystem Type: 2 (NVM Subsystem)
00:26:51.156 Entry Flags:
00:26:51.156 Duplicate Returned Information: 0
00:26:51.156 Explicit Persistent Connection Support for Discovery: 0
00:26:51.156 Transport Requirements:
00:26:51.156 Secure Channel: Not Required
00:26:51.156 Port ID: 0 (0x0000)
00:26:51.156 Controller ID: 65535 (0xffff)
00:26:51.156 Admin Max SQ Size: 128
00:26:51.156 Transport Service Identifier: 4420
00:26:51.156 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:51.156 Transport Address: 10.0.0.2 [2024-11-06 12:33:22.564803] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:26:51.156 [2024-11-06 
12:33:22.564818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec100) on tqpair=0x168a550 00:26:51.156 [2024-11-06 12:33:22.564827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.156 [2024-11-06 12:33:22.564834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec280) on tqpair=0x168a550 00:26:51.156 [2024-11-06 12:33:22.564840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.156 [2024-11-06 12:33:22.564846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec400) on tqpair=0x168a550 00:26:51.156 [2024-11-06 12:33:22.564853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.156 [2024-11-06 12:33:22.564859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.156 [2024-11-06 12:33:22.564865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.156 [2024-11-06 12:33:22.564879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.156 [2024-11-06 12:33:22.564884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.156 [2024-11-06 12:33:22.564889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.564899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.564919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.564998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 
12:33:22.565007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565169] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:51.157 [2024-11-06 12:33:22.565175] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:51.157 [2024-11-06 12:33:22.565187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 
[2024-11-06 12:33:22.565197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on 
tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:26:51.157 [2024-11-06 12:33:22.565762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.565875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.565883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.565888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.565905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.565915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.565924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.565937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.566001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.566010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.566015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.566031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.566052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.566065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.566152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.566160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.566165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.566181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.566199] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.566213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.566303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.566312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.566316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.566333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.566351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.566365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.566432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.566440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.566445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.157 [2024-11-06 12:33:22.566468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566474] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.157 [2024-11-06 12:33:22.566487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.157 [2024-11-06 12:33:22.566502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.157 [2024-11-06 12:33:22.566607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.157 [2024-11-06 12:33:22.566616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.157 [2024-11-06 12:33:22.566621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.157 [2024-11-06 12:33:22.566626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.566637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.566656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.566672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.566771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.566779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.566783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566788] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.566800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.566818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.566833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.566908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.566917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.566921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.566938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.566948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.566956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.566970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.567036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 
12:33:22.567044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.567048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.567065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.567084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.567098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.567162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.567171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.567175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.567192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.567210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 
12:33:22.567226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.567313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.567321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.567326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.567343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.567352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.567361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.567375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.571470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.571482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.571486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.571491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.571504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.571509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.571514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x168a550) 00:26:51.158 [2024-11-06 12:33:22.571523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.571538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ec580, cid 3, qid 0 00:26:51.158 [2024-11-06 12:33:22.571697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.571705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.571709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.571714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ec580) on tqpair=0x168a550 00:26:51.158 [2024-11-06 12:33:22.571724] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:26:51.158 00:26:51.158 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:51.158 [2024-11-06 12:33:22.614948] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:26:51.158 [2024-11-06 12:33:22.614984] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273849 ] 00:26:51.158 [2024-11-06 12:33:22.672583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:51.158 [2024-11-06 12:33:22.672635] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:51.158 [2024-11-06 12:33:22.672642] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:51.158 [2024-11-06 12:33:22.672658] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:51.158 [2024-11-06 12:33:22.672669] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:51.158 [2024-11-06 12:33:22.673093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:26:51.158 [2024-11-06 12:33:22.673128] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x87a550 0 00:26:51.158 [2024-11-06 12:33:22.683473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:51.158 [2024-11-06 12:33:22.683491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:51.158 [2024-11-06 12:33:22.683497] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:51.158 [2024-11-06 12:33:22.683502] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:51.158 [2024-11-06 12:33:22.683537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.683544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.683549] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.158 [2024-11-06 12:33:22.683563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:51.158 [2024-11-06 12:33:22.683586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.158 [2024-11-06 12:33:22.694472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.694484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.694489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.694495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.158 [2024-11-06 12:33:22.694509] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:51.158 [2024-11-06 12:33:22.694517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:26:51.158 [2024-11-06 12:33:22.694525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:51.158 [2024-11-06 12:33:22.694540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.694546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.694550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.158 [2024-11-06 12:33:22.694561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.158 [2024-11-06 12:33:22.694580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.158 [2024-11-06 12:33:22.694670] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.158 [2024-11-06 12:33:22.694679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.158 [2024-11-06 12:33:22.694684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.158 [2024-11-06 12:33:22.694689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.158 [2024-11-06 12:33:22.694695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:51.158 [2024-11-06 12:33:22.694705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:51.158 [2024-11-06 12:33:22.694714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.694733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.694748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.694816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.694827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.694832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.694844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:26:51.159 [2024-11-06 12:33:22.694854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.694863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.694882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.694896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.694969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.694977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.694982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.694987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.694994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.695005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.695024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.695038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.695104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.695113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.695117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.695128] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:51.159 [2024-11-06 12:33:22.695134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.695144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.695254] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:51.159 [2024-11-06 12:33:22.695261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.695271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.695289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.695306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.695371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.695380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.695384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.695396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:51.159 [2024-11-06 12:33:22.695407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.695426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.695440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.695505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.695515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.695520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.695530] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:51.159 [2024-11-06 12:33:22.695536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:51.159 [2024-11-06 12:33:22.695546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:51.159 [2024-11-06 12:33:22.695559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:51.159 [2024-11-06 12:33:22.695569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.695583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.159 [2024-11-06 12:33:22.695598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.695699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.159 [2024-11-06 12:33:22.695708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.159 [2024-11-06 12:33:22.695713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=4096, cccid=0 00:26:51.159 [2024-11-06 12:33:22.695724] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dc100) on tqpair(0x87a550): expected_datao=0, payload_size=4096 00:26:51.159 [2024-11-06 12:33:22.695730] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.695767] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.739487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.739492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.739511] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:51.159 [2024-11-06 12:33:22.739518] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:51.159 [2024-11-06 12:33:22.739523] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:51.159 [2024-11-06 12:33:22.739533] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:51.159 [2024-11-06 12:33:22.739540] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:51.159 [2024-11-06 12:33:22.739546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:51.159 [2024-11-06 12:33:22.739560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:51.159 [2024-11-06 12:33:22.739570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739576] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.739591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:51.159 [2024-11-06 12:33:22.739608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc100, cid 0, qid 0 00:26:51.159 [2024-11-06 12:33:22.739678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.159 [2024-11-06 12:33:22.739687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.159 [2024-11-06 12:33:22.739692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.159 [2024-11-06 12:33:22.739705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.739723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.159 [2024-11-06 12:33:22.739731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x87a550) 00:26:51.159 [2024-11-06 12:33:22.739748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:51.159 [2024-11-06 12:33:22.739756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.159 [2024-11-06 12:33:22.739761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.739765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.739773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.160 [2024-11-06 12:33:22.739781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.739785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.739790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.739797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.160 [2024-11-06 12:33:22.739804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.739818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.739827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.739832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.739841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.160 [2024-11-06 12:33:22.739857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x8dc100, cid 0, qid 0 00:26:51.160 [2024-11-06 12:33:22.739864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc280, cid 1, qid 0 00:26:51.160 [2024-11-06 12:33:22.739871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc400, cid 2, qid 0 00:26:51.160 [2024-11-06 12:33:22.739877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.160 [2024-11-06 12:33:22.739883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.160 [2024-11-06 12:33:22.739971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.160 [2024-11-06 12:33:22.739980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.160 [2024-11-06 12:33:22.739984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.739989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.160 [2024-11-06 12:33:22.739998] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:51.160 [2024-11-06 12:33:22.740005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 
12:33:22.740042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.740051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:51.160 [2024-11-06 12:33:22.740065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.160 [2024-11-06 12:33:22.740138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.160 [2024-11-06 12:33:22.740147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.160 [2024-11-06 12:33:22.740151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.160 [2024-11-06 12:33:22.740234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.740270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.160 [2024-11-06 12:33:22.740287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.160 [2024-11-06 12:33:22.740374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.160 [2024-11-06 12:33:22.740383] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.160 [2024-11-06 12:33:22.740388] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=4096, cccid=4 00:26:51.160 [2024-11-06 12:33:22.740399] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dc700) on tqpair(0x87a550): expected_datao=0, payload_size=4096 00:26:51.160 [2024-11-06 12:33:22.740404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740418] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.160 [2024-11-06 12:33:22.740438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.160 [2024-11-06 12:33:22.740443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.160 [2024-11-06 12:33:22.740465] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:51.160 [2024-11-06 12:33:22.740477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.740514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.160 [2024-11-06 12:33:22.740529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.160 [2024-11-06 12:33:22.740620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.160 [2024-11-06 12:33:22.740629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.160 [2024-11-06 12:33:22.740634] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740638] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=4096, cccid=4 00:26:51.160 [2024-11-06 12:33:22.740644] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dc700) on tqpair(0x87a550): expected_datao=0, payload_size=4096 00:26:51.160 [2024-11-06 12:33:22.740650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740658] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740663] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.160 [2024-11-06 12:33:22.740682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.160 [2024-11-06 12:33:22.740687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.160 [2024-11-06 12:33:22.740706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:51.160 [2024-11-06 
12:33:22.740719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:51.160 [2024-11-06 12:33:22.740728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.160 [2024-11-06 12:33:22.740744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.160 [2024-11-06 12:33:22.740759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.160 [2024-11-06 12:33:22.740838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.160 [2024-11-06 12:33:22.740847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.160 [2024-11-06 12:33:22.740852] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740856] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=4096, cccid=4 00:26:51.160 [2024-11-06 12:33:22.740862] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dc700) on tqpair(0x87a550): expected_datao=0, payload_size=4096 00:26:51.160 [2024-11-06 12:33:22.740868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740876] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.160 [2024-11-06 12:33:22.740881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.740896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.740903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.740908] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.740913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.161 [2024-11-06 12:33:22.740922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740958] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740971] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:51.161 [2024-11-06 12:33:22.740978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:51.161 [2024-11-06 12:33:22.740984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:51.161 [2024-11-06 12:33:22.741000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741005] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.161 [2024-11-06 12:33:22.741061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.161 [2024-11-06 12:33:22.741068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc880, cid 5, qid 0 00:26:51.161 [2024-11-06 12:33:22.741158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.741167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.741172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.161 [2024-11-06 12:33:22.741185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.741192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.741197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc880) on tqpair=0x87a550 00:26:51.161 [2024-11-06 
12:33:22.741215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc880, cid 5, qid 0 00:26:51.161 [2024-11-06 12:33:22.741324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.741333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.741338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc880) on tqpair=0x87a550 00:26:51.161 [2024-11-06 12:33:22.741354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc880, cid 5, qid 0 00:26:51.161 [2024-11-06 12:33:22.741468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.741476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.741481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x8dc880) on tqpair=0x87a550 00:26:51.161 [2024-11-06 12:33:22.741499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc880, cid 5, qid 0 00:26:51.161 [2024-11-06 12:33:22.741596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.161 [2024-11-06 12:33:22.741605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.161 [2024-11-06 12:33:22.741609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc880) on tqpair=0x87a550 00:26:51.161 [2024-11-06 12:33:22.741631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 
[2024-11-06 12:33:22.741682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x87a550) 00:26:51.161 [2024-11-06 12:33:22.741717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.161 [2024-11-06 12:33:22.741733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc880, cid 5, qid 0 00:26:51.161 [2024-11-06 12:33:22.741740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc700, cid 4, qid 0 00:26:51.161 [2024-11-06 12:33:22.741746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dca00, cid 6, qid 0 00:26:51.161 [2024-11-06 12:33:22.741752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dcb80, cid 7, qid 0 00:26:51.161 [2024-11-06 12:33:22.741878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.161 [2024-11-06 12:33:22.741887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.161 [2024-11-06 12:33:22.741892] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741896] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=8192, cccid=5 00:26:51.161 [2024-11-06 12:33:22.741902] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x8dc880) on tqpair(0x87a550): expected_datao=0, payload_size=8192 00:26:51.161 [2024-11-06 12:33:22.741908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741932] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.161 [2024-11-06 12:33:22.741951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.161 [2024-11-06 12:33:22.741955] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741960] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=512, cccid=4 00:26:51.161 [2024-11-06 12:33:22.741966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dc700) on tqpair(0x87a550): expected_datao=0, payload_size=512 00:26:51.161 [2024-11-06 12:33:22.741972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741980] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741985] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.741992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.161 [2024-11-06 12:33:22.741999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.161 [2024-11-06 12:33:22.742004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=512, cccid=6 00:26:51.161 [2024-11-06 12:33:22.742014] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dca00) on tqpair(0x87a550): expected_datao=0, 
payload_size=512 00:26:51.161 [2024-11-06 12:33:22.742022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742030] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:51.161 [2024-11-06 12:33:22.742050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:51.161 [2024-11-06 12:33:22.742054] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742059] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x87a550): datao=0, datal=4096, cccid=7 00:26:51.161 [2024-11-06 12:33:22.742065] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8dcb80) on tqpair(0x87a550): expected_datao=0, payload_size=4096 00:26:51.161 [2024-11-06 12:33:22.742070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:51.161 [2024-11-06 12:33:22.742094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.162 [2024-11-06 12:33:22.742101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.162 [2024-11-06 12:33:22.742105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.162 [2024-11-06 12:33:22.742110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc880) on tqpair=0x87a550 00:26:51.162 [2024-11-06 12:33:22.742125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.162 [2024-11-06 12:33:22.742132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.162 [2024-11-06 
12:33:22.742137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.162 [2024-11-06 12:33:22.742142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc700) on tqpair=0x87a550 00:26:51.162 [2024-11-06 12:33:22.742154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.162 [2024-11-06 12:33:22.742162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.162 [2024-11-06 12:33:22.742167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.162 [2024-11-06 12:33:22.742172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dca00) on tqpair=0x87a550 00:26:51.162 [2024-11-06 12:33:22.742181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.162 [2024-11-06 12:33:22.742188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.162 [2024-11-06 12:33:22.742193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.162 [2024-11-06 12:33:22.742198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dcb80) on tqpair=0x87a550 00:26:51.162 ===================================================== 00:26:51.162 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.162 ===================================================== 00:26:51.162 Controller Capabilities/Features 00:26:51.162 ================================ 00:26:51.162 Vendor ID: 8086 00:26:51.162 Subsystem Vendor ID: 8086 00:26:51.162 Serial Number: SPDK00000000000001 00:26:51.162 Model Number: SPDK bdev Controller 00:26:51.162 Firmware Version: 25.01 00:26:51.162 Recommended Arb Burst: 6 00:26:51.162 IEEE OUI Identifier: e4 d2 5c 00:26:51.162 Multi-path I/O 00:26:51.162 May have multiple subsystem ports: Yes 00:26:51.162 May have multiple controllers: Yes 00:26:51.162 Associated with SR-IOV VF: No 00:26:51.162 Max Data Transfer Size: 131072 00:26:51.162 Max Number of Namespaces: 32 00:26:51.162 
Max Number of I/O Queues: 127 00:26:51.162 NVMe Specification Version (VS): 1.3 00:26:51.162 NVMe Specification Version (Identify): 1.3 00:26:51.162 Maximum Queue Entries: 128 00:26:51.162 Contiguous Queues Required: Yes 00:26:51.162 Arbitration Mechanisms Supported 00:26:51.162 Weighted Round Robin: Not Supported 00:26:51.162 Vendor Specific: Not Supported 00:26:51.162 Reset Timeout: 15000 ms 00:26:51.162 Doorbell Stride: 4 bytes 00:26:51.162 NVM Subsystem Reset: Not Supported 00:26:51.162 Command Sets Supported 00:26:51.162 NVM Command Set: Supported 00:26:51.162 Boot Partition: Not Supported 00:26:51.162 Memory Page Size Minimum: 4096 bytes 00:26:51.162 Memory Page Size Maximum: 4096 bytes 00:26:51.162 Persistent Memory Region: Not Supported 00:26:51.162 Optional Asynchronous Events Supported 00:26:51.162 Namespace Attribute Notices: Supported 00:26:51.162 Firmware Activation Notices: Not Supported 00:26:51.162 ANA Change Notices: Not Supported 00:26:51.162 PLE Aggregate Log Change Notices: Not Supported 00:26:51.162 LBA Status Info Alert Notices: Not Supported 00:26:51.162 EGE Aggregate Log Change Notices: Not Supported 00:26:51.162 Normal NVM Subsystem Shutdown event: Not Supported 00:26:51.162 Zone Descriptor Change Notices: Not Supported 00:26:51.162 Discovery Log Change Notices: Not Supported 00:26:51.162 Controller Attributes 00:26:51.162 128-bit Host Identifier: Supported 00:26:51.162 Non-Operational Permissive Mode: Not Supported 00:26:51.162 NVM Sets: Not Supported 00:26:51.162 Read Recovery Levels: Not Supported 00:26:51.162 Endurance Groups: Not Supported 00:26:51.162 Predictable Latency Mode: Not Supported 00:26:51.162 Traffic Based Keep ALive: Not Supported 00:26:51.162 Namespace Granularity: Not Supported 00:26:51.162 SQ Associations: Not Supported 00:26:51.162 UUID List: Not Supported 00:26:51.162 Multi-Domain Subsystem: Not Supported 00:26:51.162 Fixed Capacity Management: Not Supported 00:26:51.162 Variable Capacity Management: Not Supported 
00:26:51.162 Delete Endurance Group: Not Supported 00:26:51.162 Delete NVM Set: Not Supported 00:26:51.162 Extended LBA Formats Supported: Not Supported 00:26:51.162 Flexible Data Placement Supported: Not Supported 00:26:51.162 00:26:51.162 Controller Memory Buffer Support 00:26:51.162 ================================ 00:26:51.162 Supported: No 00:26:51.162 00:26:51.162 Persistent Memory Region Support 00:26:51.162 ================================ 00:26:51.162 Supported: No 00:26:51.162 00:26:51.162 Admin Command Set Attributes 00:26:51.162 ============================ 00:26:51.162 Security Send/Receive: Not Supported 00:26:51.162 Format NVM: Not Supported 00:26:51.162 Firmware Activate/Download: Not Supported 00:26:51.162 Namespace Management: Not Supported 00:26:51.162 Device Self-Test: Not Supported 00:26:51.162 Directives: Not Supported 00:26:51.162 NVMe-MI: Not Supported 00:26:51.162 Virtualization Management: Not Supported 00:26:51.162 Doorbell Buffer Config: Not Supported 00:26:51.162 Get LBA Status Capability: Not Supported 00:26:51.162 Command & Feature Lockdown Capability: Not Supported 00:26:51.162 Abort Command Limit: 4 00:26:51.162 Async Event Request Limit: 4 00:26:51.162 Number of Firmware Slots: N/A 00:26:51.162 Firmware Slot 1 Read-Only: N/A 00:26:51.162 Firmware Activation Without Reset: N/A 00:26:51.162 Multiple Update Detection Support: N/A 00:26:51.162 Firmware Update Granularity: No Information Provided 00:26:51.162 Per-Namespace SMART Log: No 00:26:51.162 Asymmetric Namespace Access Log Page: Not Supported 00:26:51.162 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:51.162 Command Effects Log Page: Supported 00:26:51.162 Get Log Page Extended Data: Supported 00:26:51.162 Telemetry Log Pages: Not Supported 00:26:51.162 Persistent Event Log Pages: Not Supported 00:26:51.162 Supported Log Pages Log Page: May Support 00:26:51.162 Commands Supported & Effects Log Page: Not Supported 00:26:51.162 Feature Identifiers & Effects Log Page:May Support 
00:26:51.162 NVMe-MI Commands & Effects Log Page: May Support 00:26:51.162 Data Area 4 for Telemetry Log: Not Supported 00:26:51.162 Error Log Page Entries Supported: 128 00:26:51.162 Keep Alive: Supported 00:26:51.162 Keep Alive Granularity: 10000 ms 00:26:51.162 00:26:51.162 NVM Command Set Attributes 00:26:51.162 ========================== 00:26:51.162 Submission Queue Entry Size 00:26:51.162 Max: 64 00:26:51.162 Min: 64 00:26:51.162 Completion Queue Entry Size 00:26:51.162 Max: 16 00:26:51.162 Min: 16 00:26:51.162 Number of Namespaces: 32 00:26:51.162 Compare Command: Supported 00:26:51.162 Write Uncorrectable Command: Not Supported 00:26:51.162 Dataset Management Command: Supported 00:26:51.162 Write Zeroes Command: Supported 00:26:51.162 Set Features Save Field: Not Supported 00:26:51.162 Reservations: Supported 00:26:51.162 Timestamp: Not Supported 00:26:51.162 Copy: Supported 00:26:51.162 Volatile Write Cache: Present 00:26:51.162 Atomic Write Unit (Normal): 1 00:26:51.162 Atomic Write Unit (PFail): 1 00:26:51.162 Atomic Compare & Write Unit: 1 00:26:51.162 Fused Compare & Write: Supported 00:26:51.162 Scatter-Gather List 00:26:51.162 SGL Command Set: Supported 00:26:51.162 SGL Keyed: Supported 00:26:51.162 SGL Bit Bucket Descriptor: Not Supported 00:26:51.162 SGL Metadata Pointer: Not Supported 00:26:51.162 Oversized SGL: Not Supported 00:26:51.162 SGL Metadata Address: Not Supported 00:26:51.162 SGL Offset: Supported 00:26:51.162 Transport SGL Data Block: Not Supported 00:26:51.162 Replay Protected Memory Block: Not Supported 00:26:51.162 00:26:51.162 Firmware Slot Information 00:26:51.162 ========================= 00:26:51.162 Active slot: 1 00:26:51.162 Slot 1 Firmware Revision: 25.01 00:26:51.162 00:26:51.162 00:26:51.162 Commands Supported and Effects 00:26:51.162 ============================== 00:26:51.162 Admin Commands 00:26:51.162 -------------- 00:26:51.162 Get Log Page (02h): Supported 00:26:51.162 Identify (06h): Supported 00:26:51.162 Abort 
(08h): Supported 00:26:51.162 Set Features (09h): Supported 00:26:51.162 Get Features (0Ah): Supported 00:26:51.162 Asynchronous Event Request (0Ch): Supported 00:26:51.162 Keep Alive (18h): Supported 00:26:51.162 I/O Commands 00:26:51.162 ------------ 00:26:51.162 Flush (00h): Supported LBA-Change 00:26:51.162 Write (01h): Supported LBA-Change 00:26:51.162 Read (02h): Supported 00:26:51.162 Compare (05h): Supported 00:26:51.162 Write Zeroes (08h): Supported LBA-Change 00:26:51.162 Dataset Management (09h): Supported LBA-Change 00:26:51.162 Copy (19h): Supported LBA-Change 00:26:51.162 00:26:51.162 Error Log 00:26:51.162 ========= 00:26:51.162 00:26:51.162 Arbitration 00:26:51.162 =========== 00:26:51.162 Arbitration Burst: 1 00:26:51.162 00:26:51.162 Power Management 00:26:51.163 ================ 00:26:51.163 Number of Power States: 1 00:26:51.163 Current Power State: Power State #0 00:26:51.163 Power State #0: 00:26:51.163 Max Power: 0.00 W 00:26:51.163 Non-Operational State: Operational 00:26:51.163 Entry Latency: Not Reported 00:26:51.163 Exit Latency: Not Reported 00:26:51.163 Relative Read Throughput: 0 00:26:51.163 Relative Read Latency: 0 00:26:51.163 Relative Write Throughput: 0 00:26:51.163 Relative Write Latency: 0 00:26:51.163 Idle Power: Not Reported 00:26:51.163 Active Power: Not Reported 00:26:51.163 Non-Operational Permissive Mode: Not Supported 00:26:51.163 00:26:51.163 Health Information 00:26:51.163 ================== 00:26:51.163 Critical Warnings: 00:26:51.163 Available Spare Space: OK 00:26:51.163 Temperature: OK 00:26:51.163 Device Reliability: OK 00:26:51.163 Read Only: No 00:26:51.163 Volatile Memory Backup: OK 00:26:51.163 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:51.163 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:51.163 Available Spare: 0% 00:26:51.163 Available Spare Threshold: 0% 00:26:51.163 Life Percentage Used:[2024-11-06 12:33:22.742314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 
[2024-11-06 12:33:22.742321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.742330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.742345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dcb80, cid 7, qid 0 00:26:51.163 [2024-11-06 12:33:22.742436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.742445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.742450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dcb80) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742500] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:51.163 [2024-11-06 12:33:22.742514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc100) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.163 [2024-11-06 12:33:22.742533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc280) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.163 [2024-11-06 12:33:22.742547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc400) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.163 
[2024-11-06 12:33:22.742559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.163 [2024-11-06 12:33:22.742575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.742594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.742610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.742695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.742704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.742708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.742740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.742758] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.742865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.742874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.742878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.742889] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:51.163 [2024-11-06 12:33:22.742895] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:51.163 [2024-11-06 12:33:22.742907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.742917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.742926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.742940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.743008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.743016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.743021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.743041] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.743060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.743073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.743175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.743183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.743188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.743206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.743224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.743238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.743317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.743325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.743330] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.743347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.743357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.743365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.743379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 12:33:22.747467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.747479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.747484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.747488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.747503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.747508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.747513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x87a550) 00:26:51.163 [2024-11-06 12:33:22.747521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.163 [2024-11-06 12:33:22.747537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8dc580, cid 3, qid 0 00:26:51.163 [2024-11-06 
12:33:22.747608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:51.163 [2024-11-06 12:33:22.747617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:51.163 [2024-11-06 12:33:22.747622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:51.163 [2024-11-06 12:33:22.747627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8dc580) on tqpair=0x87a550 00:26:51.163 [2024-11-06 12:33:22.747640] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:26:51.163 0% 00:26:51.163 Data Units Read: 0 00:26:51.163 Data Units Written: 0 00:26:51.163 Host Read Commands: 0 00:26:51.163 Host Write Commands: 0 00:26:51.163 Controller Busy Time: 0 minutes 00:26:51.163 Power Cycles: 0 00:26:51.163 Power On Hours: 0 hours 00:26:51.163 Unsafe Shutdowns: 0 00:26:51.163 Unrecoverable Media Errors: 0 00:26:51.163 Lifetime Error Log Entries: 0 00:26:51.163 Warning Temperature Time: 0 minutes 00:26:51.163 Critical Temperature Time: 0 minutes 00:26:51.163 00:26:51.163 Number of Queues 00:26:51.163 ================ 00:26:51.163 Number of I/O Submission Queues: 127 00:26:51.163 Number of I/O Completion Queues: 127 00:26:51.163 00:26:51.163 Active Namespaces 00:26:51.163 ================= 00:26:51.163 Namespace ID:1 00:26:51.164 Error Recovery Timeout: Unlimited 00:26:51.164 Command Set Identifier: NVM (00h) 00:26:51.164 Deallocate: Supported 00:26:51.164 Deallocated/Unwritten Error: Not Supported 00:26:51.164 Deallocated Read Value: Unknown 00:26:51.164 Deallocate in Write Zeroes: Not Supported 00:26:51.164 Deallocated Guard Field: 0xFFFF 00:26:51.164 Flush: Supported 00:26:51.164 Reservation: Supported 00:26:51.164 Namespace Sharing Capabilities: Multiple Controllers 00:26:51.164 Size (in LBAs): 131072 (0GiB) 00:26:51.164 Capacity (in LBAs): 131072 (0GiB) 00:26:51.164 Utilization (in LBAs): 131072 (0GiB) 00:26:51.164 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:26:51.164 EUI64: ABCDEF0123456789 00:26:51.164 UUID: 209b37ab-5aad-460f-aa2f-6c4dc3001d5d 00:26:51.164 Thin Provisioning: Not Supported 00:26:51.164 Per-NS Atomic Units: Yes 00:26:51.164 Atomic Boundary Size (Normal): 0 00:26:51.164 Atomic Boundary Size (PFail): 0 00:26:51.164 Atomic Boundary Offset: 0 00:26:51.164 Maximum Single Source Range Length: 65535 00:26:51.164 Maximum Copy Length: 65535 00:26:51.164 Maximum Source Range Count: 1 00:26:51.164 NGUID/EUI64 Never Reused: No 00:26:51.164 Namespace Write Protected: No 00:26:51.164 Number of LBA Formats: 1 00:26:51.164 Current LBA Format: LBA Format #00 00:26:51.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:51.164 00:26:51.164 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.423 rmmod nvme_tcp 00:26:51.423 rmmod nvme_fabrics 00:26:51.423 rmmod nvme_keyring 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 273747 ']' 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 273747 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 273747 ']' 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 273747 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 273747 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 273747' 00:26:51.423 killing process with pid 273747 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 273747 00:26:51.423 12:33:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 273747 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp 
== \t\c\p ]] 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.681 12:33:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.580 12:33:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:53.580 00:26:53.580 real 0m8.914s 00:26:53.580 user 0m5.456s 00:26:53.580 sys 0m4.527s 00:26:53.580 12:33:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:53.580 12:33:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:53.580 ************************************ 00:26:53.580 END TEST nvmf_identify 00:26:53.580 ************************************ 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:26:53.840 12:33:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.840 ************************************ 00:26:53.840 START TEST nvmf_perf 00:26:53.840 ************************************ 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:53.840 * Looking for test storage... 00:26:53.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:53.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.840 --rc 
genhtml_branch_coverage=1 00:26:53.840 --rc genhtml_function_coverage=1 00:26:53.840 --rc genhtml_legend=1 00:26:53.840 --rc geninfo_all_blocks=1 00:26:53.840 --rc geninfo_unexecuted_blocks=1 00:26:53.840 00:26:53.840 ' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:53.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.840 --rc genhtml_branch_coverage=1 00:26:53.840 --rc genhtml_function_coverage=1 00:26:53.840 --rc genhtml_legend=1 00:26:53.840 --rc geninfo_all_blocks=1 00:26:53.840 --rc geninfo_unexecuted_blocks=1 00:26:53.840 00:26:53.840 ' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:53.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.840 --rc genhtml_branch_coverage=1 00:26:53.840 --rc genhtml_function_coverage=1 00:26:53.840 --rc genhtml_legend=1 00:26:53.840 --rc geninfo_all_blocks=1 00:26:53.840 --rc geninfo_unexecuted_blocks=1 00:26:53.840 00:26:53.840 ' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:53.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.840 --rc genhtml_branch_coverage=1 00:26:53.840 --rc genhtml_function_coverage=1 00:26:53.840 --rc genhtml_legend=1 00:26:53.840 --rc geninfo_all_blocks=1 00:26:53.840 --rc geninfo_unexecuted_blocks=1 00:26:53.840 00:26:53.840 ' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.840 
12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.840 12:33:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:53.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:53.840 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.841 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.100 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:54.100 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:54.100 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:54.100 12:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:59.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.371 
12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:59.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:26:59.371 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:59.372 Found net devices under 0000:af:00.0: cvl_0_0 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:59.372 Found net devices under 0000:af:00.1: cvl_0_1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:26:59.372 00:26:59.372 --- 10.0.0.2 ping statistics --- 00:26:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.372 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:59.372 00:26:59.372 --- 10.0.0.1 ping statistics --- 00:26:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.372 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=277431 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 277431 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:59.372 
12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 277431 ']' 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:59.372 12:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:59.372 [2024-11-06 12:33:30.831355] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:26:59.372 [2024-11-06 12:33:30.831417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.372 [2024-11-06 12:33:30.934185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.372 [2024-11-06 12:33:30.987292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.372 [2024-11-06 12:33:30.987332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.372 [2024-11-06 12:33:30.987343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.372 [2024-11-06 12:33:30.987352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.372 [2024-11-06 12:33:30.987361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:59.631 [2024-11-06 12:33:30.989422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.631 [2024-11-06 12:33:30.989448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.631 [2024-11-06 12:33:30.989550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.631 [2024-11-06 12:33:30.989551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:59.631 12:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:02.918 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:02.918 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:02.918 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:27:02.918 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:03.487 12:33:34 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:03.487 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:27:03.487 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:03.487 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:03.487 12:33:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:03.487 [2024-11-06 12:33:35.079409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.746 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.004 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:04.004 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.263 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:04.263 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:04.263 12:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.521 [2024-11-06 12:33:36.110780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.780 12:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:27:05.039 12:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:27:05.039 12:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:27:05.039 12:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:05.039 12:33:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:27:06.417 Initializing NVMe Controllers 00:27:06.417 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:27:06.417 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:27:06.417 Initialization complete. Launching workers. 00:27:06.417 ======================================================== 00:27:06.417 Latency(us) 00:27:06.417 Device Information : IOPS MiB/s Average min max 00:27:06.417 PCIE (0000:86:00.0) NSID 1 from core 0: 68843.64 268.92 463.69 33.26 5291.94 00:27:06.417 ======================================================== 00:27:06.417 Total : 68843.64 268.92 463.69 33.26 5291.94 00:27:06.417 00:27:06.417 12:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:07.814 Initializing NVMe Controllers 00:27:07.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:07.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:07.814 Initialization complete. Launching workers. 
00:27:07.814 ======================================================== 00:27:07.814 Latency(us) 00:27:07.814 Device Information : IOPS MiB/s Average min max 00:27:07.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 134.00 0.52 7693.53 119.05 45016.66 00:27:07.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19680.80 7185.88 48502.26 00:27:07.814 ======================================================== 00:27:07.814 Total : 185.00 0.72 10998.13 119.05 48502.26 00:27:07.814 00:27:07.814 12:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.750 Initializing NVMe Controllers 00:27:08.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:08.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:08.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:08.750 Initialization complete. Launching workers. 
00:27:08.750 ======================================================== 00:27:08.750 Latency(us) 00:27:08.750 Device Information : IOPS MiB/s Average min max 00:27:08.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10231.30 39.97 3127.14 515.27 8233.42 00:27:08.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.37 14.91 8444.14 6518.68 16060.55 00:27:08.750 ======================================================== 00:27:08.750 Total : 14048.67 54.88 4571.90 515.27 16060.55 00:27:08.750 00:27:09.009 12:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:09.009 12:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:09.009 12:33:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.543 Initializing NVMe Controllers 00:27:11.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.543 Controller IO queue size 128, less than required. 00:27:11.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.543 Controller IO queue size 128, less than required. 00:27:11.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:11.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:11.543 Initialization complete. Launching workers. 
00:27:11.543 ======================================================== 00:27:11.543 Latency(us) 00:27:11.543 Device Information : IOPS MiB/s Average min max 00:27:11.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1462.24 365.56 89030.17 59709.33 140805.54 00:27:11.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.80 145.20 230757.84 95160.88 370032.35 00:27:11.543 ======================================================== 00:27:11.543 Total : 2043.04 510.76 129320.87 59709.33 370032.35 00:27:11.543 00:27:11.543 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:11.802 No valid NVMe controllers or AIO or URING devices found 00:27:11.802 Initializing NVMe Controllers 00:27:11.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.802 Controller IO queue size 128, less than required. 00:27:11.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:11.802 Controller IO queue size 128, less than required. 00:27:11.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:11.802 WARNING: Some requested NVMe devices were skipped 00:27:11.802 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:14.339 Initializing NVMe Controllers 00:27:14.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.339 Controller IO queue size 128, less than required. 00:27:14.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:14.339 Controller IO queue size 128, less than required. 00:27:14.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:14.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:14.339 Initialization complete. Launching workers. 
00:27:14.339 00:27:14.339 ==================== 00:27:14.339 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:14.339 TCP transport: 00:27:14.339 polls: 8192 00:27:14.339 idle_polls: 4575 00:27:14.339 sock_completions: 3617 00:27:14.339 nvme_completions: 5769 00:27:14.339 submitted_requests: 8726 00:27:14.339 queued_requests: 1 00:27:14.339 00:27:14.339 ==================== 00:27:14.339 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:14.339 TCP transport: 00:27:14.339 polls: 10841 00:27:14.339 idle_polls: 7249 00:27:14.339 sock_completions: 3592 00:27:14.339 nvme_completions: 5157 00:27:14.339 submitted_requests: 7678 00:27:14.339 queued_requests: 1 00:27:14.339 ======================================================== 00:27:14.339 Latency(us) 00:27:14.339 Device Information : IOPS MiB/s Average min max 00:27:14.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1440.37 360.09 90831.44 56009.05 143444.20 00:27:14.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1287.54 321.88 100087.52 54915.55 160870.93 00:27:14.339 ======================================================== 00:27:14.339 Total : 2727.91 681.98 95200.20 54915.55 160870.93 00:27:14.339 00:27:14.339 12:33:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:14.339 12:33:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.599 rmmod nvme_tcp 00:27:14.599 rmmod nvme_fabrics 00:27:14.599 rmmod nvme_keyring 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 277431 ']' 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 277431 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 277431 ']' 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 277431 00:27:14.599 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 277431 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 277431' 00:27:14.858 killing process with pid 277431 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # 
kill 277431 00:27:14.858 12:33:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 277431 00:27:16.763 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.764 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:18.681 00:27:18.681 real 0m24.685s 00:27:18.681 user 1m7.618s 00:27:18.681 sys 0m7.697s 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:18.681 ************************************ 00:27:18.681 END TEST nvmf_perf 00:27:18.681 ************************************ 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.681 12:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.681 ************************************ 00:27:18.681 START TEST nvmf_fio_host 00:27:18.681 ************************************ 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:18.681 * Looking for test storage... 00:27:18.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.681 12:33:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.681 12:33:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:18.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.681 --rc genhtml_branch_coverage=1 00:27:18.681 --rc genhtml_function_coverage=1 00:27:18.681 --rc genhtml_legend=1 00:27:18.681 --rc geninfo_all_blocks=1 00:27:18.681 --rc geninfo_unexecuted_blocks=1 00:27:18.681 00:27:18.681 ' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:18.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.681 --rc genhtml_branch_coverage=1 00:27:18.681 --rc genhtml_function_coverage=1 00:27:18.681 --rc genhtml_legend=1 00:27:18.681 --rc geninfo_all_blocks=1 00:27:18.681 --rc geninfo_unexecuted_blocks=1 00:27:18.681 00:27:18.681 ' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:18.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.681 --rc genhtml_branch_coverage=1 00:27:18.681 --rc genhtml_function_coverage=1 00:27:18.681 --rc genhtml_legend=1 00:27:18.681 --rc geninfo_all_blocks=1 00:27:18.681 --rc geninfo_unexecuted_blocks=1 00:27:18.681 00:27:18.681 ' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:18.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.681 --rc genhtml_branch_coverage=1 00:27:18.681 --rc genhtml_function_coverage=1 00:27:18.681 --rc genhtml_legend=1 00:27:18.681 --rc geninfo_all_blocks=1 00:27:18.681 --rc geninfo_unexecuted_blocks=1 00:27:18.681 00:27:18.681 ' 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.681 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.682 12:33:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.682 12:33:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:27:23.952 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:23.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.952 12:33:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:23.952 Found net devices under 0000:af:00.0: cvl_0_0 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.952 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:23.953 Found net devices under 0000:af:00.1: cvl_0_1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
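The discovery loop traced above (nvmf/common.sh@410-429) maps each matched PCI address to its kernel interface name by globbing the device's `net/` subdirectory in sysfs. A minimal, self-contained sketch of that mapping, using a temporary directory that mimics the `/sys/bus/pci/devices/<pci>/net/<ifname>` layout (the PCI addresses and `cvl_0_*` names are copied from the log; the real script globs sysfs directly):

```shell
#!/usr/bin/env bash
# Mimic /sys/bus/pci/devices/<pci>/net/<ifname> in a scratch dir so the
# sketch runs without real hardware.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:af:00.0/net/cvl_0_0" "$tmp/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$tmp/$pci/net/"*)          # glob expands to the interface dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep only ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$tmp"
```

The `${arr[@]##*/}` expansion is the same idiom the traced script uses at nvmf/common.sh@427 to turn full sysfs paths into bare interface names.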
00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.953 12:33:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.953 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:27:24.212 00:27:24.212 --- 10.0.0.2 ping statistics --- 00:27:24.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.212 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:27:24.212 00:27:24.212 --- 10.0.0.1 ping statistics --- 00:27:24.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.212 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=283966 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 283966 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 283966 ']' 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:24.212 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.212 [2024-11-06 12:33:55.698260] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:27:24.212 [2024-11-06 12:33:55.698318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.212 [2024-11-06 12:33:55.797058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.471 [2024-11-06 12:33:55.847410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.471 [2024-11-06 12:33:55.847450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
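The `nvmf_tcp_init` sequence traced earlier (nvmf/common.sh@271-287) builds a two-interface loopback topology: the target NIC is moved into a network namespace with 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, and port 4420 is opened. A hedged sketch of that sequence — interface names, addresses, and the namespace-naming convention are taken from the log, and the function only prints the commands rather than executing them, since the real ones need root:

```shell
#!/usr/bin/env bash
# Print (do not run) the command sequence nvmf_tcp_init performs.
# Namespace name follows the log's convention: <target_if>_ns_spdk.
nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2
    local ns=${target_if}_ns_spdk
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $target_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $initiator_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if" \
        "ip link set $initiator_if up" \
        "ip netns exec $ns ip link set $target_if up" \
        "ip netns exec $ns ip link set lo up" \
        "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

This is why `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` in the log: the target process must live in the namespace that owns the 10.0.0.2 interface.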
00:27:24.471 [2024-11-06 12:33:55.847465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.471 [2024-11-06 12:33:55.847475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.471 [2024-11-06 12:33:55.847482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.471 [2024-11-06 12:33:55.849535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.471 [2024-11-06 12:33:55.849635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.471 [2024-11-06 12:33:55.849740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.471 [2024-11-06 12:33:55.849741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.471 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:24.471 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:27:24.471 12:33:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:24.730 [2024-11-06 12:33:56.209271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.730 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:24.730 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:24.730 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.730 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:24.989 Malloc1 00:27:24.989 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.248 12:33:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:25.506 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.766 [2024-11-06 12:33:57.362718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.766 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:26.025 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:27:26.309 12:33:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:26.309 12:33:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:26.576 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:26.576 fio-3.35 00:27:26.576 Starting 1 thread 00:27:29.104 00:27:29.104 test: (groupid=0, jobs=1): err= 0: pid=284647: Wed Nov 6 12:34:00 2024 00:27:29.104 read: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(101MiB/2005msec) 00:27:29.104 slat (usec): min=2, max=241, avg= 2.54, stdev= 2.13 00:27:29.104 clat (usec): min=3105, max=10104, avg=5464.05, stdev=413.14 00:27:29.104 lat (usec): min=3137, max=10107, avg=5466.59, stdev=413.10 00:27:29.104 clat percentiles (usec): 00:27:29.104 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:27:29.104 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:27:29.104 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:27:29.104 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7767], 99.95th=[ 8160], 00:27:29.104 | 99.99th=[ 9372] 00:27:29.104 bw ( KiB/s): min=50792, max=51808, per=100.00%, avg=51414.00, stdev=485.63, samples=4 00:27:29.104 iops : min=12698, max=12952, avg=12853.50, stdev=121.41, samples=4 00:27:29.104 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(100MiB/2005msec); 0 zone resets 00:27:29.104 slat (usec): min=2, max=225, avg= 2.60, stdev= 1.54 00:27:29.104 clat (usec): min=2445, max=8963, avg=4465.24, stdev=351.24 00:27:29.104 lat (usec): min=2461, max=8966, avg=4467.84, stdev=351.28 00:27:29.104 clat percentiles (usec): 00:27:29.104 | 1.00th=[ 3720], 5.00th=[ 3949], 10.00th=[ 4080], 20.00th=[ 4228], 00:27:29.104 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4555], 00:27:29.104 | 70.00th=[ 
4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:27:29.104 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 6980], 99.95th=[ 8029], 00:27:29.104 | 99.99th=[ 8979] 00:27:29.104 bw ( KiB/s): min=50584, max=51816, per=99.98%, avg=51280.00, stdev=527.80, samples=4 00:27:29.104 iops : min=12646, max=12954, avg=12820.00, stdev=131.95, samples=4 00:27:29.104 lat (msec) : 4=3.25%, 10=96.75%, 20=0.01% 00:27:29.104 cpu : usr=78.59%, sys=19.36%, ctx=64, majf=0, minf=3 00:27:29.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:29.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.104 issued rwts: total=25764,25710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.104 00:27:29.104 Run status group 0 (all jobs): 00:27:29.104 READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=101MiB (106MB), run=2005-2005msec 00:27:29.104 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=100MiB (105MB), run=2005-2005msec 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:27:29.104 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:27:29.105 
12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:29.105 12:34:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:29.362 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:29.362 fio-3.35 00:27:29.362 Starting 1 thread 00:27:31.893 00:27:31.893 test: (groupid=0, jobs=1): err= 0: pid=285265: Wed Nov 6 12:34:03 2024 00:27:31.893 read: IOPS=7860, BW=123MiB/s (129MB/s)(246MiB/2006msec) 00:27:31.893 slat (usec): min=3, max=126, avg= 4.24, stdev= 1.78 00:27:31.893 clat (usec): min=2474, max=54991, avg=9739.36, stdev=4148.62 00:27:31.893 lat (usec): min=2478, max=54995, avg=9743.61, stdev=4148.62 00:27:31.893 clat percentiles (usec): 00:27:31.893 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7504], 00:27:31.893 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10290], 00:27:31.893 | 70.00th=[10945], 80.00th=[11207], 90.00th=[12125], 95.00th=[12911], 00:27:31.893 | 99.00th=[15533], 99.50th=[47973], 99.90th=[52691], 99.95th=[53216], 00:27:31.893 | 99.99th=[54789] 00:27:31.893 bw ( KiB/s): min=55904, max=75912, per=49.77%, avg=62594.00, stdev=9036.06, samples=4 00:27:31.893 iops : min= 3494, max= 4744, avg=3912.00, stdev=564.51, samples=4 00:27:31.893 write: IOPS=4649, BW=72.6MiB/s (76.2MB/s)(128MiB/1766msec); 0 zone resets 00:27:31.893 slat (usec): min=45, max=239, avg=47.27, stdev= 6.14 00:27:31.893 clat (usec): min=2437, max=19437, avg=11538.56, stdev=2090.92 00:27:31.893 lat (usec): min=2483, max=19483, avg=11585.83, stdev=2090.83 00:27:31.893 clat percentiles (usec): 00:27:31.893 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9896], 
00:27:31.893 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:27:31.893 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14222], 95.00th=[15270], 00:27:31.893 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:27:31.893 | 99.99th=[19530] 00:27:31.893 bw ( KiB/s): min=57280, max=79233, per=87.75%, avg=65280.25, stdev=9658.19, samples=4 00:27:31.893 iops : min= 3580, max= 4952, avg=4080.00, stdev=603.61, samples=4 00:27:31.893 lat (msec) : 4=0.37%, 10=44.27%, 20=54.84%, 50=0.31%, 100=0.22% 00:27:31.893 cpu : usr=83.29%, sys=13.52%, ctx=138, majf=0, minf=3 00:27:31.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:31.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.894 issued rwts: total=15769,8211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.894 00:27:31.894 Run status group 0 (all jobs): 00:27:31.894 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2006-2006msec 00:27:31.894 WRITE: bw=72.6MiB/s (76.2MB/s), 72.6MiB/s-72.6MiB/s (76.2MB/s-76.2MB/s), io=128MiB (135MB), run=1766-1766msec 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
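Before the teardown above, the target was provisioned through five `rpc.py` calls (traced around 12:33:56-12:33:57). A sketch of that sequence, with the bdev name, NQN, serial, and listener address copied from the log; `echo` stands in for invoking `rpc.py` against a live target:

```shell
#!/usr/bin/env bash
# Sketch only: rpc.py is not invoked here. On a live SPDK target, drop the
# echo and run each command against the running nvmf_tgt's RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for args in \
    "nvmf_create_transport -t tcp -o -u 8192" \
    "bdev_malloc_create 64 512 -b Malloc1" \
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001" \
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1" \
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"; do
    echo "$rpc $args"
done
```

The ordering matters: the transport must exist before a listener can be added, and the malloc bdev must exist before it can be attached as a namespace.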
00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.894 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.894 rmmod nvme_tcp 00:27:32.152 rmmod nvme_fabrics 00:27:32.152 rmmod nvme_keyring 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 283966 ']' 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 283966 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 283966 ']' 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 283966 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 283966 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 283966' 
00:27:32.152 killing process with pid 283966 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 283966 00:27:32.152 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 283966 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.410 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.314 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.314 00:27:34.314 real 0m15.887s 00:27:34.314 user 0m58.618s 00:27:34.314 sys 0m6.082s 00:27:34.314 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:34.314 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.314 ************************************ 
00:27:34.314 END TEST nvmf_fio_host 00:27:34.314 ************************************ 00:27:34.574 12:34:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:34.574 12:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:34.574 12:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:34.574 12:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.574 ************************************ 00:27:34.574 START TEST nvmf_failover 00:27:34.574 ************************************ 00:27:34.574 12:34:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:34.574 * Looking for test storage... 00:27:34.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:34.574 12:34:06 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:34.574 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:34.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.575 --rc genhtml_branch_coverage=1 00:27:34.575 --rc genhtml_function_coverage=1 00:27:34.575 --rc genhtml_legend=1 00:27:34.575 --rc geninfo_all_blocks=1 00:27:34.575 --rc geninfo_unexecuted_blocks=1 00:27:34.575 00:27:34.575 ' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:34.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.575 --rc genhtml_branch_coverage=1 00:27:34.575 --rc genhtml_function_coverage=1 00:27:34.575 --rc genhtml_legend=1 00:27:34.575 --rc geninfo_all_blocks=1 00:27:34.575 --rc geninfo_unexecuted_blocks=1 00:27:34.575 00:27:34.575 ' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:34.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.575 --rc genhtml_branch_coverage=1 00:27:34.575 --rc genhtml_function_coverage=1 00:27:34.575 --rc genhtml_legend=1 00:27:34.575 --rc geninfo_all_blocks=1 00:27:34.575 --rc geninfo_unexecuted_blocks=1 00:27:34.575 00:27:34.575 ' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:34.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.575 --rc genhtml_branch_coverage=1 00:27:34.575 --rc genhtml_function_coverage=1 00:27:34.575 --rc genhtml_legend=1 00:27:34.575 --rc 
geninfo_all_blocks=1 00:27:34.575 --rc geninfo_unexecuted_blocks=1 00:27:34.575 00:27:34.575 ' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:34.575 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:34.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.835 12:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.213 12:34:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.213 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.213 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.213 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.214 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.214 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:27:40.214 00:27:40.214 --- 10.0.0.2 ping statistics --- 00:27:40.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.214 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:27:40.214 00:27:40.214 --- 10.0.0.1 ping statistics --- 00:27:40.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.214 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=289267 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 289267 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 289267 ']' 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.214 [2024-11-06 12:34:11.513670] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:27:40.214 [2024-11-06 12:34:11.513727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.214 [2024-11-06 12:34:11.584483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:40.214 [2024-11-06 12:34:11.621261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.214 [2024-11-06 12:34:11.621293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.214 [2024-11-06 12:34:11.621299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.214 [2024-11-06 12:34:11.621304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:40.214 [2024-11-06 12:34:11.621309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.214 [2024-11-06 12:34:11.622756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.214 [2024-11-06 12:34:11.622869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.214 [2024-11-06 12:34:11.622877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.214 12:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:40.473 [2024-11-06 12:34:12.037603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.473 12:34:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:40.732 Malloc0 00:27:40.990 12:34:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.248 12:34:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.506 12:34:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.764 [2024-11-06 12:34:13.150964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.764 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.022 [2024-11-06 12:34:13.423748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:42.022 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:42.022 [2024-11-06 12:34:13.600296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:42.022 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=289584 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 289584 /var/tmp/bdevperf.sock 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 289584 ']' 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:42.023 12:34:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:42.956 12:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:42.956 12:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:27:42.956 12:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:43.521 NVMe0n1 00:27:43.521 12:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:44.086 00:27:44.086 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=289844 00:27:44.086 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:44.087 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:27:45.021 12:34:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.279 [2024-11-06 12:34:16.682211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8920f0 is same with the state(6) to be set [... identical message repeated 20 more times through 12:34:16.682365 ...] 00:27:45.280 12:34:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:48.561 12:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n
nqn.2016-06.io.spdk:cnode1 -x failover 00:27:48.561 00:27:48.820 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.820 [2024-11-06 12:34:20.357508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892ef0 is same with the state(6) to be set [... identical message repeated 8 more times through 12:34:20.357587 ...] 00:27:48.820 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:52.154 12:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.155 [2024-11-06 12:34:23.649139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.155 12:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:53.089 12:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:53.347 [2024-11-06 12:34:24.922809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x893c50 is same with the state(6) to be set [... identical message repeated 30 more times through 12:34:24.923009 ...] 00:27:53.348 12:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 289844 00:27:59.914 { 00:27:59.914 "results": [ 00:27:59.914 { 00:27:59.914 "job": "NVMe0n1", 00:27:59.914 "core_mask": "0x1", 00:27:59.914 "workload": "verify", 00:27:59.914 "status": "finished", 00:27:59.914 "verify_range": { 00:27:59.914 "start": 0, 00:27:59.914 "length": 16384 00:27:59.914 },
00:27:59.914 "queue_depth": 128, 00:27:59.914 "io_size": 4096, 00:27:59.914 "runtime": 15.012382, 00:27:59.914 "iops": 10368.507809087192, 00:27:59.914 "mibps": 40.501983629246844, 00:27:59.914 "io_failed": 8285, 00:27:59.914 "io_timeout": 0, 00:27:59.914 "avg_latency_us": 11687.944293750912, 00:27:59.914 "min_latency_us": 610.6763636363636, 00:27:59.914 "max_latency_us": 15609.483636363637 00:27:59.914 } 00:27:59.914 ], 00:27:59.914 "core_count": 1 00:27:59.914 } 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 289584 ']' 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 289584' 00:27:59.914 killing process with pid 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 289584 00:27:59.914 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.914 [2024-11-06 12:34:13.659107] Starting SPDK v25.01-pre git sha1 81757caea / 
DPDK 24.03.0 initialization... 00:27:59.914 [2024-11-06 12:34:13.659159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289584 ] 00:27:59.914 [2024-11-06 12:34:13.742051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.914 [2024-11-06 12:34:13.791215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.914 Running I/O for 15 seconds... 00:27:59.914 10553.00 IOPS, 41.22 MiB/s [2024-11-06T11:34:31.529Z] [2024-11-06 12:34:16.683774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.914 [2024-11-06 12:34:16.683980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.683992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 
[2024-11-06 12:34:16.684023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684150] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:59.914 [2024-11-06 12:34:16.684284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.914 [2024-11-06 12:34:16.684294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 
[2024-11-06 12:34:16.684662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.684760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.684986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.684998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 
[2024-11-06 12:34:16.685042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.915 [2024-11-06 12:34:16.685118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.915 [2024-11-06 12:34:16.685162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.915 [2024-11-06 12:34:16.685173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 
[2024-11-06 12:34:16.685413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 
[2024-11-06 12:34:16.685791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.916 [2024-11-06 12:34:16.685866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.685985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.685994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.686008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.686018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.686031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.916 [2024-11-06 12:34:16.686040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.916 [2024-11-06 12:34:16.686052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 
[2024-11-06 12:34:16.686159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.917 [2024-11-06 12:34:16.686387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 
[2024-11-06 12:34:16.686537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:16.686615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.917 [2024-11-06 12:34:16.686653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.917 [2024-11-06 12:34:16.686663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:27:59.917 [2024-11-06 12:34:16.686673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 
12:34:16.686729] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:59.917 [2024-11-06 12:34:16.686758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.917 [2024-11-06 12:34:16.686769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.917 [2024-11-06 12:34:16.686790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.917 [2024-11-06 12:34:16.686811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.917 [2024-11-06 12:34:16.686832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:16.686842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:27:59.917 [2024-11-06 12:34:16.691109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:59.917 [2024-11-06 12:34:16.691145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb830 (9): Bad file descriptor 00:27:59.917 [2024-11-06 12:34:16.849771] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:27:59.917 9576.00 IOPS, 37.41 MiB/s [2024-11-06T11:34:31.532Z] 9797.00 IOPS, 38.27 MiB/s [2024-11-06T11:34:31.532Z] 9942.50 IOPS, 38.84 MiB/s [2024-11-06T11:34:31.532Z] [2024-11-06 12:34:20.357809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:20.357849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:20.357869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:20.357880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:20.357898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:20.357908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:20.357921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:20.357931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:20.357943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.917 [2024-11-06 12:34:20.357953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.917 [2024-11-06 12:34:20.357966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.918 [2024-11-06 12:34:20.357975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.918 [2024-11-06 12:34:20.357987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.918 [2024-11-06 12:34:20.357997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.918 [2024-11-06 12:34:20.358009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.918 [2024-11-06 12:34:20.358020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.918 [2024-11-06 12:34:20.358033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.918 [2024-11-06 12:34:20.358043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.918 [2024-11-06 12:34:20.358055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.918 [2024-11-06 
12:34:20.358065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.918 [2024-11-06 12:34:20.358539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.918 [2024-11-06 12:34:20.358693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.918 [2024-11-06 12:34:20.358705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.358978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.358990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.919 [2024-11-06 12:34:20.359588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.919 [2024-11-06 12:34:20.359597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.359980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.359989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.920 [2024-11-06 12:34:20.360299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.920 [2024-11-06 12:34:20.360340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0
00:27:59.920 [2024-11-06 12:34:20.360350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.920 [2024-11-06 12:34:20.360371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.920 [2024-11-06 12:34:20.360379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0
00:27:59.920 [2024-11-06 12:34:20.360389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.920 [2024-11-06 12:34:20.360407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.920 [2024-11-06 12:34:20.360415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0
00:27:59.920 [2024-11-06 12:34:20.360424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.920 [2024-11-06 12:34:20.360444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.920 [2024-11-06 12:34:20.360453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0
00:27:59.920 [2024-11-06 12:34:20.360468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.920 [2024-11-06 12:34:20.360479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.920 [2024-11-06 12:34:20.360487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.920 [2024-11-06 12:34:20.360495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34368 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:59.921 [2024-11-06 12:34:20.360698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:59.921 [2024-11-06 12:34:20.360706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0
00:27:59.921 [2024-11-06 12:34:20.360721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.921 [2024-11-06 12:34:20.360731] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 12:34:20.360746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34384 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 12:34:20.360781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 12:34:20.360815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 
12:34:20.360850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33528 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 12:34:20.360884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33536 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.921 [2024-11-06 12:34:20.360911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.921 [2024-11-06 12:34:20.360919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33544 len:8 PRP1 0x0 PRP2 0x0 00:27:59.921 [2024-11-06 12:34:20.360928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.360977] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:59.921 [2024-11-06 12:34:20.361005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:20.361017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.921 [2024-11-06 12:34:20.361027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:20.361037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.361049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:20.361059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.361069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:20.361079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:20.361088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:59.921 [2024-11-06 12:34:20.361128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb830 (9): Bad file descriptor 00:27:59.921 [2024-11-06 12:34:20.365350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:59.921 [2024-11-06 12:34:20.430536] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:27:59.921 9929.00 IOPS, 38.79 MiB/s [2024-11-06T11:34:31.536Z] 10221.33 IOPS, 39.93 MiB/s [2024-11-06T11:34:31.536Z] 10240.00 IOPS, 40.00 MiB/s [2024-11-06T11:34:31.536Z] 10268.25 IOPS, 40.11 MiB/s [2024-11-06T11:34:31.536Z] 10292.33 IOPS, 40.20 MiB/s [2024-11-06T11:34:31.536Z] [2024-11-06 12:34:24.923370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:24.923411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:24.923437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:24.923465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.921 [2024-11-06 12:34:24.923487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb830 is same with the state(6) to be set 00:27:59.921 [2024-11-06 12:34:24.923570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.921 [2024-11-06 12:34:24.923730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.921 [2024-11-06 12:34:24.923742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 
[2024-11-06 12:34:24.923972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.923984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.923994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 
12:34:24.924605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.922 [2024-11-06 12:34:24.924614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.922 [2024-11-06 12:34:24.924626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.923 [2024-11-06 12:34:24.924636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.923 [2024-11-06 12:34:24.924648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.923 [2024-11-06 12:34:24.924657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.923 [2024-11-06 12:34:24.924669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.923 [2024-11-06 12:34:24.924679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.923 [2024-11-06 12:34:24.924690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.923 [2024-11-06 12:34:24.924700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.923 [2024-11-06 12:34:24.924712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.923 [2024-11-06 12:34:24.924722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
[... repeated nvme_qpair notices elided: each queued WRITE (lba:42024-42496) and READ (lba:41480-41592) on qid:1 was printed and completed with ABORTED - SQ DELETION (00/08) during submission queue teardown ...] 00:27:59.925 [2024-11-06 12:34:24.926397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.925 [2024-11-06 12:34:24.926406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.925 [2024-11-06 12:34:24.926415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41600 len:8 PRP1 0x0 PRP2 0x0 00:27:59.925 [2024-11-06 12:34:24.926425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.925 [2024-11-06 12:34:24.926481] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:59.925 [2024-11-06 12:34:24.926495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:59.925 [2024-11-06 12:34:24.930717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:59.925 [2024-11-06 12:34:24.930761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb830 (9): Bad file descriptor 00:27:59.925 [2024-11-06 12:34:24.953498] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:27:59.925 10274.20 IOPS, 40.13 MiB/s [2024-11-06T11:34:31.540Z] 10320.73 IOPS, 40.32 MiB/s [2024-11-06T11:34:31.540Z] 10314.50 IOPS, 40.29 MiB/s [2024-11-06T11:34:31.540Z] 10315.38 IOPS, 40.29 MiB/s [2024-11-06T11:34:31.540Z] 10392.57 IOPS, 40.60 MiB/s 00:27:59.925 Latency(us) 00:27:59.925 [2024-11-06T11:34:31.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.925 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:59.925 Verification LBA range: start 0x0 length 0x4000 00:27:59.925 NVMe0n1 : 15.01 10368.51 40.50 551.88 0.00 11687.94 610.68 15609.48 00:27:59.925 [2024-11-06T11:34:31.540Z] =================================================================================================================== 00:27:59.925 [2024-11-06T11:34:31.540Z] Total : 10368.51 40.50 551.88 0.00 11687.94 610.68 15609.48 00:27:59.925 Received shutdown signal, test time was about 15.000000 seconds 00:27:59.925 00:27:59.925 Latency(us) 00:27:59.925 [2024-11-06T11:34:31.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.925 [2024-11-06T11:34:31.540Z] =================================================================================================================== 00:27:59.925 [2024-11-06T11:34:31.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=292623 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:59.925 
12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 292623 /var/tmp/bdevperf.sock 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 292623 ']' 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:59.925 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:59.925 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:59.925 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:27:59.925 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:59.925 [2024-11-06 12:34:31.391230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.925 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:00.183 [2024-11-06 12:34:31.563690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:00.183 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:00.442 NVMe0n1 00:28:00.442 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:01.008 00:28:01.008 12:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:01.008 00:28:01.008 12:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.008 12:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:01.265 12:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.524 12:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:04.804 12:34:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:04.804 12:34:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:04.804 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:28:04.804 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=293522 00:28:04.804 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 293522 00:28:05.738 { 00:28:05.738 "results": [ 00:28:05.738 { 00:28:05.738 "job": "NVMe0n1", 00:28:05.738 "core_mask": "0x1", 00:28:05.738 "workload": "verify", 00:28:05.738 "status": "finished", 00:28:05.738 "verify_range": { 00:28:05.738 "start": 0, 00:28:05.738 "length": 16384 00:28:05.738 }, 00:28:05.738 "queue_depth": 128, 00:28:05.738 "io_size": 4096, 00:28:05.738 "runtime": 1.005275, 00:28:05.738 "iops": 10515.530576210489, 00:28:05.738 "mibps": 41.07629131332222, 00:28:05.738 "io_failed": 0, 00:28:05.738 "io_timeout": 0, 00:28:05.738 "avg_latency_us": 12108.270347864223, 00:28:05.738 "min_latency_us": 1832.0290909090909, 00:28:05.738 "max_latency_us": 15192.436363636363 00:28:05.738 } 00:28:05.738 ], 00:28:05.738 "core_count": 1 00:28:05.738 } 00:28:05.738 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.738 [2024-11-06 12:34:30.900829] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:28:05.738 [2024-11-06 12:34:30.900882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292623 ] 00:28:05.738 [2024-11-06 12:34:30.982423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.738 [2024-11-06 12:34:31.027214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.738 [2024-11-06 12:34:32.883352] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:05.738 [2024-11-06 12:34:32.883409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.738 [2024-11-06 12:34:32.883425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.738 [2024-11-06 12:34:32.883437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.738 [2024-11-06 12:34:32.883448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.738 [2024-11-06 12:34:32.883467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.738 [2024-11-06 12:34:32.883479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.738 [2024-11-06 12:34:32.883489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.738 [2024-11-06 12:34:32.883499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.738 [2024-11-06 12:34:32.883510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:28:05.738 [2024-11-06 12:34:32.883542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:05.738 [2024-11-06 12:34:32.883562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cd830 (9): Bad file descriptor 00:28:05.738 [2024-11-06 12:34:32.926684] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:05.738 Running I/O for 1 seconds... 00:28:05.738 10443.00 IOPS, 40.79 MiB/s 00:28:05.738 Latency(us) 00:28:05.738 [2024-11-06T11:34:37.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.738 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:05.738 Verification LBA range: start 0x0 length 0x4000 00:28:05.738 NVMe0n1 : 1.01 10515.53 41.08 0.00 0.00 12108.27 1832.03 15192.44 00:28:05.738 [2024-11-06T11:34:37.353Z] =================================================================================================================== 00:28:05.738 [2024-11-06T11:34:37.353Z] Total : 10515.53 41.08 0.00 0.00 12108.27 1832.03 15192.44 00:28:05.738 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.738 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:05.997 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.255 12:34:37 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.255 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:06.513 12:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.770 12:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 292623 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 292623 ']' 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 292623 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 292623 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 292623' 00:28:10.052 killing process 
with pid 292623 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 292623 00:28:10.052 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 292623 00:28:10.310 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:10.310 12:34:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:10.569 rmmod nvme_tcp 00:28:10.569 rmmod nvme_fabrics 00:28:10.569 rmmod nvme_keyring 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 289267 ']' 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 289267 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 289267 ']' 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 289267 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 289267 00:28:10.569 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:10.570 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:10.570 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 289267' 00:28:10.570 killing process with pid 289267 00:28:10.570 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 289267 00:28:10.570 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 289267 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:10.828 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:10.829 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:10.829 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:10.829 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.829 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.829 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.360 00:28:13.360 real 0m38.434s 00:28:13.360 user 2m6.059s 00:28:13.360 sys 0m7.199s 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:13.360 ************************************ 00:28:13.360 END TEST nvmf_failover 00:28:13.360 ************************************ 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.360 ************************************ 00:28:13.360 START TEST nvmf_host_discovery 00:28:13.360 ************************************ 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:13.360 * Looking for test storage... 
00:28:13.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.360 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.361 --rc genhtml_branch_coverage=1 00:28:13.361 --rc genhtml_function_coverage=1 00:28:13.361 --rc 
genhtml_legend=1 00:28:13.361 --rc geninfo_all_blocks=1 00:28:13.361 --rc geninfo_unexecuted_blocks=1 00:28:13.361 00:28:13.361 ' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.361 --rc genhtml_branch_coverage=1 00:28:13.361 --rc genhtml_function_coverage=1 00:28:13.361 --rc genhtml_legend=1 00:28:13.361 --rc geninfo_all_blocks=1 00:28:13.361 --rc geninfo_unexecuted_blocks=1 00:28:13.361 00:28:13.361 ' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.361 --rc genhtml_branch_coverage=1 00:28:13.361 --rc genhtml_function_coverage=1 00:28:13.361 --rc genhtml_legend=1 00:28:13.361 --rc geninfo_all_blocks=1 00:28:13.361 --rc geninfo_unexecuted_blocks=1 00:28:13.361 00:28:13.361 ' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.361 --rc genhtml_branch_coverage=1 00:28:13.361 --rc genhtml_function_coverage=1 00:28:13.361 --rc genhtml_legend=1 00:28:13.361 --rc geninfo_all_blocks=1 00:28:13.361 --rc geninfo_unexecuted_blocks=1 00:28:13.361 00:28:13.361 ' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.361 12:34:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.361 12:34:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.361 12:34:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.361 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.629 
12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.629 12:34:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.629 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:18.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:18.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.629 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:18.630 Found net devices under 0000:af:00.0: cvl_0_0 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:18.630 Found net devices under 0000:af:00.1: cvl_0_1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.630 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:28:18.889 00:28:18.889 --- 10.0.0.2 ping statistics --- 00:28:18.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.889 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:28:18.889 00:28:18.889 --- 10.0.0.1 ping statistics --- 00:28:18.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.889 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.889 
12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=298249 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 298249 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 298249 ']' 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.889 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.889 [2024-11-06 12:34:50.375121] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:28:18.889 [2024-11-06 12:34:50.375179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.889 [2024-11-06 12:34:50.446588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.889 [2024-11-06 12:34:50.485663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.889 [2024-11-06 12:34:50.485696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.889 [2024-11-06 12:34:50.485702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.889 [2024-11-06 12:34:50.485708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.889 [2024-11-06 12:34:50.485713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:18.889 [2024-11-06 12:34:50.486265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 [2024-11-06 12:34:50.640390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 [2024-11-06 12:34:50.652597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:19.148 12:34:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 null0 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 null1 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=298314 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 298314 /tmp/host.sock 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 298314 ']' 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:19.148 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:19.148 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.148 [2024-11-06 12:34:50.737143] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:28:19.148 [2024-11-06 12:34:50.737199] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298314 ] 00:28:19.407 [2024-11-06 12:34:50.832911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.407 [2024-11-06 12:34:50.882885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:19.407 12:34:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.407 12:34:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.407 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:19.666 12:34:51 
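The `get_subsystem_names` and `get_bdev_list` helpers traced here both pipe RPC JSON through `jq -r '.[].name'`, then `sort`, then `xargs`, collapsing the names onto one ordered, space-separated line; an empty controller list therefore compares as `''` (hence the `[[ '' == '' ]]` checks before any controller is attached). The collapsing step alone can be reproduced with plain shell, no SPDK or jq needed:

```shell
#!/usr/bin/env bash
# After `jq -r '.[].name'` each name sits on its own line;
# sort | xargs joins them into a single ordered string.
names=$(printf 'nvme0n2\nnvme0n1\n' | sort | xargs)
echo "$names"

# Empty input collapses to the empty string, which is what the
# pre-attach [[ '' == '' ]] comparisons in the trace rely on.
empty=$(printf '' | sort | xargs)
echo "empty='$empty'"
```

The inputs (`nvme0n2`, `nvme0n1`) are illustrative stand-ins for the bdev names the real RPC would return.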
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.666 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:19.667 12:34:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.667 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 [2024-11-06 12:34:51.326310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
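The repeated `waitforcondition` / `eval` records above come from a generic poll-until-true helper in common/autotest_common.sh: it `eval`s a caller-supplied condition string up to `max` times, sleeping between attempts, and returns 0 as soon as it holds. A simplified re-creation of that pattern (names mirror the trace; this is a sketch, not the exact upstream implementation):

```shell
#!/usr/bin/env bash
# Poll an arbitrary bash condition string until it holds or retries run out.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets callers pass compound conditions such as
        # 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

expected_count=0
notification_count=0
waitforcondition '(( notification_count == expected_count ))' && echo "condition met"
```

Because the condition is a string, callers can reference variables (like `notification_count`) that the condition itself updates on each evaluation, which is exactly how the discovery test waits for subsystem names and notification counts to converge.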
00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.926 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.184 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:28:20.184 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:28:20.442 [2024-11-06 12:34:52.058644] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:20.442 [2024-11-06 12:34:52.058672] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:20.442 [2024-11-06 12:34:52.058691] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:20.701 [2024-11-06 12:34:52.144997] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:20.701 [2024-11-06 12:34:52.239756] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:20.701 [2024-11-06 12:34:52.240602] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xbf0280:1 started. 00:28:20.701 [2024-11-06 12:34:52.242483] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:20.701 [2024-11-06 12:34:52.242505] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.701 [2024-11-06 12:34:52.247230] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbf0280 was disconnected and freed. delete nvme_qpair. 00:28:20.958 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:20.958 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:20.958 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:20.959 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.217 12:34:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:21.217 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:21.218 
12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:21.218 [2024-11-06 12:34:52.763144] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbf0620:1 started. 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:21.218 [2024-11-06 12:34:52.768524] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbf0620 was disconnected and freed. delete nvme_qpair. 
00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.218 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 [2024-11-06 12:34:52.866395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.477 [2024-11-06 12:34:52.866649] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:21.477 [2024-11-06 12:34:52.866676] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:21.477 12:34:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:21.477 [2024-11-06 12:34:52.952957] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:21.477 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.477 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:21.477 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:28:21.736 [2024-11-06 12:34:53.214384] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:21.736 [2024-11-06 12:34:53.214430] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:21.736 [2024-11-06 12:34:53.214442] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:28:21.736 [2024-11-06 12:34:53.214449] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.675 [2024-11-06 12:34:54.138374] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:22.675 [2024-11-06 12:34:54.138403] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:22.675 [2024-11-06 12:34:54.144640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.675 [2024-11-06 12:34:54.144665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.675 [2024-11-06 12:34:54.144678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.675 [2024-11-06 12:34:54.144688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.675 [2024-11-06 12:34:54.144700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.675 [2024-11-06 12:34:54.144710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.675 [2024-11-06 12:34:54.144721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.675 [2024-11-06 12:34:54.144735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.675 [2024-11-06 12:34:54.144746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:22.675 12:34:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.675 [2024-11-06 12:34:54.154651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.675 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.675 [2024-11-06 12:34:54.164692] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.675 [2024-11-06 12:34:54.164708] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.675 [2024-11-06 12:34:54.164715] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.675 [2024-11-06 12:34:54.164722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.675 [2024-11-06 12:34:54.164744] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:22.675 [2024-11-06 12:34:54.164956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.675 [2024-11-06 12:34:54.164976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.675 [2024-11-06 12:34:54.164987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.675 [2024-11-06 12:34:54.165003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.675 [2024-11-06 12:34:54.165018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.675 [2024-11-06 12:34:54.165027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.675 [2024-11-06 12:34:54.165038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.675 [2024-11-06 12:34:54.165048] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.675 [2024-11-06 12:34:54.165055] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.675 [2024-11-06 12:34:54.165061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.675 [2024-11-06 12:34:54.174778] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.675 [2024-11-06 12:34:54.174793] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:22.675 [2024-11-06 12:34:54.174799] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.675 [2024-11-06 12:34:54.174805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.675 [2024-11-06 12:34:54.174824] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.675 [2024-11-06 12:34:54.175060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.675 [2024-11-06 12:34:54.175077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.675 [2024-11-06 12:34:54.175088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.675 [2024-11-06 12:34:54.175103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.675 [2024-11-06 12:34:54.175117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.675 [2024-11-06 12:34:54.175126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.675 [2024-11-06 12:34:54.175137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.675 [2024-11-06 12:34:54.175145] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.675 [2024-11-06 12:34:54.175151] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.675 [2024-11-06 12:34:54.175157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:22.675 [2024-11-06 12:34:54.184860] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.676 [2024-11-06 12:34:54.184879] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.184886] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.184892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.676 [2024-11-06 12:34:54.184912] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.185031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.676 [2024-11-06 12:34:54.185049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.676 [2024-11-06 12:34:54.185060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.676 [2024-11-06 12:34:54.185076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.676 [2024-11-06 12:34:54.185090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.676 [2024-11-06 12:34:54.185100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.676 [2024-11-06 12:34:54.185110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.676 [2024-11-06 12:34:54.185118] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:22.676 [2024-11-06 12:34:54.185125] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.676 [2024-11-06 12:34:54.185131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.676 [2024-11-06 12:34:54.194946] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.676 [2024-11-06 12:34:54.194963] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.194970] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.194984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.676 [2024-11-06 12:34:54.195002] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:22.676 [2024-11-06 12:34:54.195178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.676 [2024-11-06 12:34:54.195196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.676 [2024-11-06 12:34:54.195206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.676 [2024-11-06 12:34:54.195221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.676 [2024-11-06 12:34:54.195235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.676 [2024-11-06 12:34:54.195245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.676 [2024-11-06 12:34:54.195255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.676 [2024-11-06 12:34:54.195263] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.676 [2024-11-06 12:34:54.195269] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.676 [2024-11-06 12:34:54.195275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.676 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.676 [2024-11-06 12:34:54.205036] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.676 [2024-11-06 12:34:54.205055] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.205061] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:28:22.676 [2024-11-06 12:34:54.205068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.676 [2024-11-06 12:34:54.205087] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.205268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.676 [2024-11-06 12:34:54.205285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.676 [2024-11-06 12:34:54.205296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.676 [2024-11-06 12:34:54.205315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.676 [2024-11-06 12:34:54.205329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.676 [2024-11-06 12:34:54.205338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.676 [2024-11-06 12:34:54.205348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.676 [2024-11-06 12:34:54.205356] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.676 [2024-11-06 12:34:54.205363] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.676 [2024-11-06 12:34:54.205369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.676 [2024-11-06 12:34:54.215121] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:28:22.676 [2024-11-06 12:34:54.215139] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.215145] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.215152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.676 [2024-11-06 12:34:54.215171] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.215401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.676 [2024-11-06 12:34:54.215418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.676 [2024-11-06 12:34:54.215429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.676 [2024-11-06 12:34:54.215444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.676 [2024-11-06 12:34:54.215465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.676 [2024-11-06 12:34:54.215475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.676 [2024-11-06 12:34:54.215485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.676 [2024-11-06 12:34:54.215493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.676 [2024-11-06 12:34:54.215500] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:28:22.676 [2024-11-06 12:34:54.215506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.676 [2024-11-06 12:34:54.225205] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.676 [2024-11-06 12:34:54.225222] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.225228] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.225234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.676 [2024-11-06 12:34:54.225253] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.225516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.676 [2024-11-06 12:34:54.225534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.676 [2024-11-06 12:34:54.225550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.676 [2024-11-06 12:34:54.225565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.676 [2024-11-06 12:34:54.225579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.676 [2024-11-06 12:34:54.225588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.676 [2024-11-06 12:34:54.225598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:22.676 [2024-11-06 12:34:54.225606] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.676 [2024-11-06 12:34:54.225613] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.676 [2024-11-06 12:34:54.225619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.676 [2024-11-06 12:34:54.235288] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.676 [2024-11-06 12:34:54.235304] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.676 [2024-11-06 12:34:54.235310] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.676 [2024-11-06 12:34:54.235317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.677 [2024-11-06 12:34:54.235336] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:22.677 [2024-11-06 12:34:54.235521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.677 [2024-11-06 12:34:54.235538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.677 [2024-11-06 12:34:54.235549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.677 [2024-11-06 12:34:54.235564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.677 [2024-11-06 12:34:54.235578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.677 [2024-11-06 12:34:54.235587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.677 [2024-11-06 12:34:54.235597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.677 [2024-11-06 12:34:54.235605] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.677 [2024-11-06 12:34:54.235612] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.677 [2024-11-06 12:34:54.235618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.677 [2024-11-06 12:34:54.245370] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.677 [2024-11-06 12:34:54.245386] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:22.677 [2024-11-06 12:34:54.245392] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.677 [2024-11-06 12:34:54.245399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.677 [2024-11-06 12:34:54.245417] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.677 [2024-11-06 12:34:54.245546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.677 [2024-11-06 12:34:54.245563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.677 [2024-11-06 12:34:54.245574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.677 [2024-11-06 12:34:54.245589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.677 [2024-11-06 12:34:54.245603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.677 [2024-11-06 12:34:54.245613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.677 [2024-11-06 12:34:54.245622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.677 [2024-11-06 12:34:54.245631] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.677 [2024-11-06 12:34:54.245638] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.677 [2024-11-06 12:34:54.245644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:22.677 [2024-11-06 12:34:54.255452] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:28:22.677 [2024-11-06 12:34:54.255475] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.677 [2024-11-06 12:34:54.255481] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.677 [2024-11-06 12:34:54.255489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.677 [2024-11-06 12:34:54.255509] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:22.677 [2024-11-06 12:34:54.255634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.677 [2024-11-06 12:34:54.255651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.677 [2024-11-06 12:34:54.255663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.677 [2024-11-06 12:34:54.255682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.677 [2024-11-06 12:34:54.255696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.677 [2024-11-06 12:34:54.255707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.677 [2024-11-06 12:34:54.255718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.677 [2024-11-06 12:34:54.255726] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.677 [2024-11-06 12:34:54.255733] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:28:22.677 [2024-11-06 12:34:54.255739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:22.677 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.677 [2024-11-06 12:34:54.265542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:22.677 [2024-11-06 12:34:54.265558] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:22.677 [2024-11-06 12:34:54.265565] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:22.677 [2024-11-06 12:34:54.265571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:22.677 [2024-11-06 12:34:54.265589] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:22.677 [2024-11-06 12:34:54.265792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.677 [2024-11-06 12:34:54.265809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc0890 with addr=10.0.0.2, port=4420 00:28:22.677 [2024-11-06 12:34:54.265819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0890 is same with the state(6) to be set 00:28:22.677 [2024-11-06 12:34:54.265834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0890 (9): Bad file descriptor 00:28:22.677 [2024-11-06 12:34:54.265866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:22.677 [2024-11-06 12:34:54.265879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:22.677 [2024-11-06 12:34:54.265889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:22.677 [2024-11-06 12:34:54.265897] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:22.677 [2024-11-06 12:34:54.265904] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:22.677 [2024-11-06 12:34:54.265910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:22.677 [2024-11-06 12:34:54.266863] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:22.677 [2024-11-06 12:34:54.266884] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.936 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:28:22.936 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.872 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:23.873 12:34:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.873 
12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:23.873 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:28:24.231 12:34:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.231 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.243 [2024-11-06 12:34:56.630629] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:25.243 [2024-11-06 12:34:56.630652] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:25.243 [2024-11-06 12:34:56.630668] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:25.243 [2024-11-06 12:34:56.716965] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:25.503 [2024-11-06 12:34:57.017686] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:25.503 [2024-11-06 12:34:57.018517] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xbbe100:1 started. 00:28:25.503 [2024-11-06 12:34:57.020722] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:25.503 [2024-11-06 12:34:57.020759] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.503 [2024-11-06 12:34:57.029772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xbbe100 was disconnected and freed. delete nvme_qpair. 00:28:25.503 request: 00:28:25.503 { 00:28:25.503 "name": "nvme", 00:28:25.503 "trtype": "tcp", 00:28:25.503 "traddr": "10.0.0.2", 00:28:25.503 "adrfam": "ipv4", 00:28:25.503 "trsvcid": "8009", 00:28:25.503 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:25.503 "wait_for_attach": true, 00:28:25.503 "method": "bdev_nvme_start_discovery", 00:28:25.503 "req_id": 1 00:28:25.503 } 00:28:25.503 Got JSON-RPC error response 00:28:25.503 response: 00:28:25.503 { 00:28:25.503 "code": -17, 00:28:25.503 "message": "File exists" 00:28:25.503 } 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.503 12:34:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:25.503 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:25.763 12:34:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.763 request: 00:28:25.763 { 00:28:25.763 "name": "nvme_second", 00:28:25.763 "trtype": "tcp", 00:28:25.763 "traddr": "10.0.0.2", 00:28:25.763 "adrfam": "ipv4", 00:28:25.763 "trsvcid": "8009", 00:28:25.763 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:25.763 "wait_for_attach": true, 00:28:25.763 "method": "bdev_nvme_start_discovery", 00:28:25.763 "req_id": 1 00:28:25.763 } 00:28:25.763 Got JSON-RPC error response 00:28:25.763 response: 00:28:25.763 { 00:28:25.763 "code": -17, 00:28:25.763 "message": "File exists" 00:28:25.763 } 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.763 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:26.699 [2024-11-06 12:34:58.272323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.699 [2024-11-06 12:34:58.272361] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1f90 with addr=10.0.0.2, port=8010 00:28:26.699 [2024-11-06 12:34:58.272382] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:26.699 [2024-11-06 12:34:58.272391] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:26.699 [2024-11-06 12:34:58.272400] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:28.076 [2024-11-06 12:34:59.274750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.076 [2024-11-06 12:34:59.274783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1f90 with addr=10.0.0.2, port=8010 00:28:28.076 [2024-11-06 12:34:59.274801] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:28.076 [2024-11-06 12:34:59.274810] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:28.076 [2024-11-06 12:34:59.274819] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:29.012 [2024-11-06 12:35:00.276900] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:29.012 request: 00:28:29.012 { 00:28:29.012 "name": "nvme_second", 00:28:29.012 "trtype": "tcp", 00:28:29.012 "traddr": "10.0.0.2", 00:28:29.012 "adrfam": "ipv4", 00:28:29.012 "trsvcid": "8010", 00:28:29.012 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:29.012 "wait_for_attach": false, 00:28:29.012 "attach_timeout_ms": 3000, 00:28:29.012 "method": "bdev_nvme_start_discovery", 00:28:29.012 "req_id": 1 00:28:29.012 } 00:28:29.012 Got JSON-RPC error response 00:28:29.012 response: 00:28:29.012 { 00:28:29.012 "code": -110, 00:28:29.012 "message": "Connection timed out" 00:28:29.012 } 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 
1 == 0 ]] 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:29.012 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 298314 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:29.013 12:35:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.013 rmmod nvme_tcp 00:28:29.013 rmmod nvme_fabrics 00:28:29.013 rmmod nvme_keyring 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 298249 ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 298249 ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 298249' 00:28:29.013 
killing process with pid 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 298249 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.013 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.271 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.271 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.271 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.271 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.271 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.174 00:28:31.174 real 0m18.241s 00:28:31.174 user 0m23.283s 00:28:31.174 sys 0m5.724s 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.174 ************************************ 00:28:31.174 END TEST nvmf_host_discovery 00:28:31.174 ************************************ 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.174 ************************************ 00:28:31.174 START TEST nvmf_host_multipath_status 00:28:31.174 ************************************ 00:28:31.174 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:31.434 * Looking for test storage... 
00:28:31.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:31.434 12:35:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.434 12:35:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.434 --rc genhtml_branch_coverage=1 00:28:31.434 --rc genhtml_function_coverage=1 00:28:31.434 --rc genhtml_legend=1 00:28:31.434 --rc geninfo_all_blocks=1 00:28:31.434 --rc geninfo_unexecuted_blocks=1 00:28:31.434 00:28:31.434 ' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.434 --rc genhtml_branch_coverage=1 00:28:31.434 --rc genhtml_function_coverage=1 00:28:31.434 --rc genhtml_legend=1 00:28:31.434 --rc geninfo_all_blocks=1 00:28:31.434 --rc geninfo_unexecuted_blocks=1 00:28:31.434 00:28:31.434 ' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.434 --rc genhtml_branch_coverage=1 00:28:31.434 --rc genhtml_function_coverage=1 00:28:31.434 --rc genhtml_legend=1 00:28:31.434 --rc geninfo_all_blocks=1 00:28:31.434 --rc geninfo_unexecuted_blocks=1 00:28:31.434 00:28:31.434 ' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.434 --rc genhtml_branch_coverage=1 00:28:31.434 --rc genhtml_function_coverage=1 00:28:31.434 --rc genhtml_legend=1 00:28:31.434 --rc geninfo_all_blocks=1 00:28:31.434 --rc geninfo_unexecuted_blocks=1 00:28:31.434 00:28:31.434 ' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:31.434 
12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.434 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:31.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.435 12:35:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.435 12:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
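The `e810`, `x722`, and `mlx` arrays declared above are filled, in the log entries that follow, by appending `pci_bus_cache` lookups keyed on `vendor:device` IDs. A minimal bash sketch of that append pattern, with a hypothetical cache entry standing in for the real PCI-bus scan:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the real pci_bus_cache built from a PCI scan;
# the two addresses mirror the 0000:af:00.0 / 0000:af:00.1 ports in the log.
declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1" )

intel=0x8086
e810=()
# Same append pattern as nvmf/common.sh: a missing key expands to nothing
# (adds zero elements), while a hit word-splits into one element per address.
e810+=(${pci_bus_cache["$intel:0x1592"]})   # no match: adds nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})   # match: adds two E810 ports
echo "${#e810[@]} device(s): ${e810[*]}"    # → 2 device(s): 0000:af:00.0 0000:af:00.1
```

This is why the log later evaluates `(( 2 == 0 ))` as false: the `e810` array ended up holding both ports, so the "no supported devices" bail-out is skipped.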
00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:36.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:36.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:36.705 Found net devices under 0000:af:00.0: cvl_0_0 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.705 12:35:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.705 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:36.706 Found net devices under 0000:af:00.1: cvl_0_1 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.706 12:35:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.706 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:28:36.965 00:28:36.965 --- 10.0.0.2 ping statistics --- 00:28:36.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.965 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:28:36.965 00:28:36.965 --- 10.0.0.1 ping statistics --- 00:28:36.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.965 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=303793 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 303793 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 303793 ']' 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.965 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:37.224 [2024-11-06 12:35:08.612269] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:28:37.224 [2024-11-06 12:35:08.612329] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.224 [2024-11-06 12:35:08.714862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:37.224 [2024-11-06 12:35:08.763485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.224 [2024-11-06 12:35:08.763527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:37.224 [2024-11-06 12:35:08.763537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.224 [2024-11-06 12:35:08.763546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.224 [2024-11-06 12:35:08.763554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.224 [2024-11-06 12:35:08.765033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.224 [2024-11-06 12:35:08.765040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=303793 00:28:37.482 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:37.741 [2024-11-06 12:35:09.160852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.741 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:28:38.000 Malloc0 00:28:38.000 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:38.285 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:38.544 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.802 [2024-11-06 12:35:10.273773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.802 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:39.061 [2024-11-06 12:35:10.558630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=304085 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 304085 /var/tmp/bdevperf.sock 00:28:39.061 12:35:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 304085 ']' 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:39.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.061 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:39.320 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:39.320 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:28:39.320 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:39.579 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:40.147 Nvme0n1 00:28:40.148 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:40.717 Nvme0n1 00:28:40.717 12:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:40.717 12:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:42.621 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:42.621 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:42.879 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:43.138 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:44.074 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:44.074 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:44.074 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.074 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:44.333 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.333 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:44.333 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:44.333 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.592 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:44.592 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:44.592 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.592 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:44.851 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.851 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:44.851 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:44.851 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.418 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.418 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:45.418 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.418 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:45.418 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.418 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:45.418 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:45.418 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.677 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.677 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:45.677 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:46.244 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:46.244 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:47.182 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:47.182 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:47.182 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.182 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:47.441 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:47.441 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:47.441 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.441 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:47.699 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.699 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:47.699 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.699 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.264 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:48.522 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.779 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:48.779 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.779 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.035 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.035 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:49.035 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:49.292 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:49.549 12:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:50.483 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:50.483 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:50.483 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.483 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:50.741 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.741 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:50.741 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.741 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:50.999 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:50.999 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:50.999 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:50.999 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.257 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.257 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:51.257 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.257 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.824 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:52.390 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:52.390 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:52.390 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:52.390 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:52.957 12:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:53.890 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:53.890 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:53.890 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.890 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:54.147 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.147 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:54.147 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.147 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:54.405 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:54.405 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:54.405 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.405 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:54.664 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.664 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:54.664 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.664 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:54.922 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.922 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:54.922 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.922 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:55.181 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.181 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:55.181 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.181 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:55.439 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:55.439 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:55.439 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:55.697 12:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:55.956 12:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:57.329 12:35:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.329 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:57.586 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:57.586 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:57.586 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.586 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:57.843 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:57.843 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:57.843 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.843 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:58.100 
12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.100 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:58.100 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.100 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:58.357 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:58.357 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:58.357 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.357 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:58.615 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:58.615 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:58.615 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:58.873 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:59.438 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:00.371 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:00.371 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:00.371 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.371 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:00.371 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:00.372 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:00.372 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.372 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:00.629 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:00.629 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:00.629 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.629 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:01.195 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.453 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:01.453 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:01.453 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.453 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:01.711 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.711 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:01.969 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:29:01.969 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:02.227 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:02.485 12:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.858 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:04.116 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.116 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:04.117 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.117 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:04.374 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.375 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:04.375 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:04.375 
12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.633 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.633 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:04.633 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.633 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:04.891 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.891 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:04.891 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.891 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:05.149 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.149 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:05.149 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:05.715 12:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:05.715 12:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:06.649 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:06.649 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:06.649 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:06.649 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:06.906 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:06.906 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:06.906 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:06.906 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:07.472 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.472 12:35:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:07.472 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.472 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:07.472 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.472 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:07.472 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.472 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:07.729 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.729 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:07.987 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.987 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:08.244 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.244 
12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:08.244 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.244 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:08.501 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.501 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:08.501 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:08.759 12:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:09.017 12:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:29:09.952 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:09.952 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:09.952 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:09.952 12:35:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:10.210 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:10.210 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:10.210 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.210 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:10.469 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:10.469 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:10.469 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.469 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:10.727 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:10.727 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:10.727 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:10.727 12:35:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:11.293 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:11.550 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:11.550 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:11.550 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:12.116 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:12.374 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:13.308 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:13.308 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:13.308 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.308 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:13.565 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:13.565 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:13.565 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.565 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:13.823 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:13.823 12:35:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:13.823 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:13.823 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:14.081 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:14.081 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:14.081 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.081 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:14.339 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:14.339 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:14.339 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.339 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:14.597 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:14.597 
12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:14.597 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:14.597 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 304085 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 304085 ']' 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 304085 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.855 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 304085 00:29:15.137 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:29:15.137 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:29:15.138 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 304085' 00:29:15.138 killing process with pid 304085 00:29:15.138 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 304085 00:29:15.138 12:35:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 304085 00:29:15.138 { 00:29:15.138 "results": [ 00:29:15.138 { 00:29:15.138 "job": "Nvme0n1", 00:29:15.138 "core_mask": "0x4", 00:29:15.138 "workload": "verify", 00:29:15.138 "status": "terminated", 00:29:15.138 "verify_range": { 00:29:15.138 "start": 0, 00:29:15.138 "length": 16384 00:29:15.138 }, 00:29:15.138 "queue_depth": 128, 00:29:15.138 "io_size": 4096, 00:29:15.138 "runtime": 34.282304, 00:29:15.138 "iops": 8999.482648540776, 00:29:15.138 "mibps": 35.154229095862405, 00:29:15.138 "io_failed": 0, 00:29:15.138 "io_timeout": 0, 00:29:15.138 "avg_latency_us": 14189.166399699683, 00:29:15.138 "min_latency_us": 113.10545454545455, 00:29:15.138 "max_latency_us": 4087539.898181818 00:29:15.138 } 00:29:15.138 ], 00:29:15.138 "core_count": 1 00:29:15.138 } 00:29:15.138 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 304085 00:29:15.138 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:15.138 [2024-11-06 12:35:10.624919] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:29:15.138 [2024-11-06 12:35:10.624972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304085 ] 00:29:15.138 [2024-11-06 12:35:10.678690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.138 [2024-11-06 12:35:10.719791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.138 Running I/O for 90 seconds... 
00:29:15.138 7884.00 IOPS, 30.80 MiB/s [2024-11-06T11:35:46.753Z] 7910.00 IOPS, 30.90 MiB/s [2024-11-06T11:35:46.753Z] 7918.67 IOPS, 30.93 MiB/s [2024-11-06T11:35:46.753Z] 7923.00 IOPS, 30.95 MiB/s [2024-11-06T11:35:46.753Z] 7930.80 IOPS, 30.98 MiB/s [2024-11-06T11:35:46.753Z] 8380.00 IOPS, 32.73 MiB/s [2024-11-06T11:35:46.753Z] 8918.71 IOPS, 34.84 MiB/s [2024-11-06T11:35:46.753Z] 9305.12 IOPS, 36.35 MiB/s [2024-11-06T11:35:46.753Z] 9594.89 IOPS, 37.48 MiB/s [2024-11-06T11:35:46.753Z] 9424.00 IOPS, 36.81 MiB/s [2024-11-06T11:35:46.753Z] 9297.00 IOPS, 36.32 MiB/s [2024-11-06T11:35:46.753Z] 9182.50 IOPS, 35.87 MiB/s [2024-11-06T11:35:46.753Z] 9092.54 IOPS, 35.52 MiB/s [2024-11-06T11:35:46.753Z] 9010.36 IOPS, 35.20 MiB/s [2024-11-06T11:35:46.753Z] 8937.73 IOPS, 34.91 MiB/s [2024-11-06T11:35:46.753Z] [2024-11-06 12:35:27.246563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.138 [2024-11-06 12:35:27.246934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.246986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.138 [2024-11-06 12:35:27.247569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.138 [2024-11-06 12:35:27.247576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:29:15.139 [2024-11-06 12:35:27.247642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 
[2024-11-06 12:35:27.247738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 
12:35:27.247839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.247985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.247996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.139 [2024-11-06 12:35:27.248265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.139 [2024-11-06 12:35:27.248276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.248988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.248994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.140 [2024-11-06 12:35:27.249609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.140 [2024-11-06 12:35:27.249621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.249954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.249960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.250328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.250988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.250999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.251005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.141 [2024-11-06 12:35:27.251023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.141 [2024-11-06 12:35:27.251181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.141 [2024-11-06 12:35:27.251192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.142 [2024-11-06 12:35:27.251198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.251874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.251880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.142 [2024-11-06 12:35:27.252566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.142 [2024-11-06 12:35:27.252577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.252990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.252996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.253795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.143 [2024-11-06 12:35:27.254482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.143 [2024-11-06 12:35:27.254493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.254888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.254894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.255993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.255999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.144 [2024-11-06 12:35:27.256376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.144 [2024-11-06 12:35:27.256387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.256393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.256404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.256410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.256421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.256427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.256438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.256444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.256465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.256476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.145 [2024-11-06 12:35:27.257241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.257449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.145 [2024-11-06 12:35:27.259827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.145 [2024-11-06 12:35:27.259838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.259844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.259855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.259861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.146 [2024-11-06 12:35:27.260540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.146 [2024-11-06 12:35:27.260546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.260819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.260825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.147 [2024-11-06 12:35:27.261797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.147 [2024-11-06 12:35:27.261809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.261815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.261985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.261995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.148 [2024-11-06 12:35:27.262759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-11-06 12:35:27.262864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.148 [2024-11-06 12:35:27.262875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-11-06 12:35:27.262881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.262892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-11-06 12:35:27.262899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.262910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-11-06 12:35:27.262916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.262929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-11-06 12:35:27.262935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.263777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.263785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.264029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.264037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.149 [2024-11-06 12:35:27.264049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.149 [2024-11-06 12:35:27.264055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... repeated nvme_qpair.c command/completion NOTICE pairs elided: WRITE (lba 26000-26928) and READ (lba 25912-25992) commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-06 12:35:27.264-.271 ...]
00:29:15.153 [2024-11-06 12:35:27.271236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.153 [2024-11-06 12:35:27.271763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.153 [2024-11-06 12:35:27.271769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.271989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.271995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.272317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.272323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.154 [2024-11-06 12:35:27.273119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.154 [2024-11-06 12:35:27.273126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.155 [2024-11-06 12:35:27.273936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.155 [2024-11-06 12:35:27.273953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.155 [2024-11-06 12:35:27.273964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.273971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.156 [2024-11-06 12:35:27.274582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.274943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.274950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.275173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.275181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.275193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.275200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.275211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.275217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.275228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.275234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.156 [2024-11-06 12:35:27.275246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.156 [2024-11-06 12:35:27.275252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.275986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.275992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.276810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.276816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.277066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.277075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.277088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.277096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.277108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.157 [2024-11-06 12:35:27.277114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.157 [2024-11-06 12:35:27.277125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.277951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.277957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.158 [2024-11-06 12:35:27.278990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.158 [2024-11-06 12:35:27.278997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.279487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.279498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.279507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-11-06 12:35:27.280355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.280491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.280499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.159 [2024-11-06 12:35:27.282711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.159 [2024-11-06 12:35:27.282717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.282882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.282888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.160 [2024-11-06 12:35:27.283520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.160 [2024-11-06 12:35:27.283531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.283840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.283846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.161 [2024-11-06 12:35:27.284986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.161 [2024-11-06 12:35:27.284995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.162 [2024-11-06 12:35:27.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.162 [2024-11-06 12:35:27.285447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:15.162 [2024-11-06 12:35:27.285462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.162 [2024-11-06 12:35:27.285469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:15.162 [2024-11-06 12:35:27.285480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-11-06 12:35:27.285487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~115 further command/completion pairs elided (12:35:27.285724-12:35:27.290524): READ commands for lba 25920-25992 and WRITE commands for lba 26000-26816 (len:8) on sqid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0001-0071 ...]
00:29:15.165 [2024-11-06 12:35:27.290535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:15.165 [2024-11-06 12:35:27.290964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.165 [2024-11-06 12:35:27.290970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.291294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.291950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.291970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.291988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.291999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.166 [2024-11-06 12:35:27.292130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.292374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.166 [2024-11-06 12:35:27.294692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:15.166 [2024-11-06 12:35:27.294704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.294984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.294990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.167 [2024-11-06 12:35:27.295215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.167 [2024-11-06 12:35:27.295222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0
[... repeated command/completion pairs elided: from 12:35:27.295232 through 12:35:27.298969, nvme_qpair.c logs alternating 243:nvme_io_qpair_print_command (WRITE and READ, sqid:1, nsid:1, LBAs 25912–26928, len:8) and 474:spdk_nvme_print_completion notices, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd advancing 003b–002c ...]
00:29:15.170 [2024-11-06 12:35:27.298980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.298989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.170 [2024-11-06 12:35:27.299739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:15.170 [2024-11-06 12:35:27.299750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.299768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.299785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.299802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.299819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.299836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.299842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.300978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.300984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.171 [2024-11-06 12:35:27.301616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.171 [2024-11-06 12:35:27.301628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.301978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.301988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.172 [2024-11-06 12:35:27.302351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.172 [2024-11-06 12:35:27.302359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.172 [2024-11-06 12:35:27.302861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.302928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.302934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.303424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.303443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.172 [2024-11-06 12:35:27.303467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.172 [2024-11-06 12:35:27.303485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.172 [2024-11-06 12:35:27.303502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.172 [2024-11-06 12:35:27.303520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.172 [2024-11-06 12:35:27.303538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:15.172 [2024-11-06 12:35:27.303550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.173 [2024-11-06 12:35:27.303648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.303666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.303683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.303701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.303719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.303762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.303769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.173 [2024-11-06 12:35:27.306622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:15.173 [2024-11-06 12:35:27.306635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.306987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.306993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.174 [2024-11-06 12:35:27.307288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:15.174 [2024-11-06 12:35:27.307301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:15.175 [2024-11-06 12:35:27.307773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:15.175 [2024-11-06 12:35:27.307788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.307981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.307988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.308003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.308009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:27.308025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:27.308031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:15.175 8394.44 IOPS, 32.79 MiB/s [2024-11-06T11:35:46.790Z] 7900.65 IOPS, 30.86 MiB/s [2024-11-06T11:35:46.790Z] 7461.72 IOPS, 29.15 MiB/s [2024-11-06T11:35:46.790Z] 7069.00 IOPS, 27.61 MiB/s [2024-11-06T11:35:46.790Z] 7278.10 IOPS, 28.43 MiB/s [2024-11-06T11:35:46.790Z] 7514.48 IOPS, 29.35 MiB/s [2024-11-06T11:35:46.790Z] 7731.64 IOPS, 30.20 MiB/s [2024-11-06T11:35:46.790Z] 7932.35 IOPS, 30.99 MiB/s [2024-11-06T11:35:46.790Z] 8115.33 IOPS, 31.70 MiB/s [2024-11-06T11:35:46.790Z] 8286.16 IOPS, 32.37 MiB/s [2024-11-06T11:35:46.790Z] 8436.46 IOPS, 32.95 MiB/s [2024-11-06T11:35:46.790Z] 8578.30 IOPS, 33.51 MiB/s [2024-11-06T11:35:46.790Z] 8699.21 IOPS, 33.98 MiB/s [2024-11-06T11:35:46.790Z] 8824.83 IOPS, 34.47 MiB/s [2024-11-06T11:35:46.790Z] 8943.23 IOPS, 34.93 MiB/s [2024-11-06T11:35:46.790Z] 9054.68 IOPS, 35.37 MiB/s [2024-11-06T11:35:46.790Z] [2024-11-06 12:35:43.736904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:43.736942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:43.736975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:43.736983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:43.737771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:43.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:43.737804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:43.737810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:43.737822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.175 [2024-11-06 12:35:43.737829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:15.175 [2024-11-06 12:35:43.737841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.175 [2024-11-06 12:35:43.737847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:15.175
9085.34 IOPS, 35.49 MiB/s [2024-11-06T11:35:46.790Z] 9050.45 IOPS, 35.35 MiB/s [2024-11-06T11:35:46.790Z] 9017.68 IOPS, 35.23 MiB/s [2024-11-06T11:35:46.791Z] Received shutdown signal, test time was about 34.282886 seconds 00:29:15.176
00:29:15.176 Latency(us)
00:29:15.176 [2024-11-06T11:35:46.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.176 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:15.176 Verification LBA range: start 0x0 length 0x4000
00:29:15.176 Nvme0n1 : 34.28 8999.48 35.15 0.00 0.00 14189.17 113.11 4087539.90
00:29:15.176 [2024-11-06T11:35:46.791Z] ===================================================================================================================
00:29:15.176 [2024-11-06T11:35:46.791Z] Total : 8999.48 35.15 0.00 0.00 14189.17 113.11 4087539.90
00:29:15.176 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.433 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.433 rmmod nvme_tcp 00:29:15.433 rmmod nvme_fabrics 00:29:15.433 rmmod nvme_keyring 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:15.433 12:35:47
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 303793 ']' 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 303793 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 303793 ']' 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 303793 00:29:15.433 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 303793 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 303793' 00:29:15.691 killing process with pid 303793 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 303793 00:29:15.691 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 303793 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:15.949 12:35:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.949 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.852 00:29:17.852 real 0m46.626s 00:29:17.852 user 2m13.040s 00:29:17.852 sys 0m12.978s 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:17.852 ************************************ 00:29:17.852 END TEST nvmf_host_multipath_status 00:29:17.852 ************************************ 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:17.852 12:35:49 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.852 ************************************ 00:29:17.852 START TEST nvmf_discovery_remove_ifc 00:29:17.852 ************************************ 00:29:17.852 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:18.111 * Looking for test storage... 00:29:18.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.111 12:35:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.111 
12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.111 --rc genhtml_branch_coverage=1 00:29:18.111 --rc genhtml_function_coverage=1 00:29:18.111 --rc genhtml_legend=1 00:29:18.111 --rc geninfo_all_blocks=1 00:29:18.111 --rc geninfo_unexecuted_blocks=1 00:29:18.111 00:29:18.111 ' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.111 --rc genhtml_branch_coverage=1 00:29:18.111 --rc genhtml_function_coverage=1 00:29:18.111 --rc genhtml_legend=1 00:29:18.111 --rc geninfo_all_blocks=1 00:29:18.111 --rc geninfo_unexecuted_blocks=1 00:29:18.111 00:29:18.111 ' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.111 --rc genhtml_branch_coverage=1 00:29:18.111 --rc genhtml_function_coverage=1 00:29:18.111 --rc genhtml_legend=1 00:29:18.111 --rc geninfo_all_blocks=1 00:29:18.111 --rc geninfo_unexecuted_blocks=1 00:29:18.111 00:29:18.111 ' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.111 --rc genhtml_branch_coverage=1 00:29:18.111 --rc genhtml_function_coverage=1 00:29:18.111 --rc genhtml_legend=1 
00:29:18.111 --rc geninfo_all_blocks=1 00:29:18.111 --rc geninfo_unexecuted_blocks=1 00:29:18.111 00:29:18.111 ' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.111 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:18.112 
12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.112 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:23.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:23.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:23.378 Found net devices under 0000:af:00.0: cvl_0_0 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.378 12:35:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:23.378 Found net devices under 0000:af:00.1: cvl_0_1 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.378 12:35:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.378 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.379 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.379 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.379 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.379 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.379 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.637 12:35:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:29:23.637 00:29:23.637 --- 10.0.0.2 ping statistics --- 00:29:23.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.637 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:29:23.637 00:29:23.637 --- 10.0.0.1 ping statistics --- 00:29:23.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.637 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.637 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.638 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.638 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=314288 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 314288 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 314288 ']' 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:23.896 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.896 [2024-11-06 12:35:55.308119] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:29:23.896 [2024-11-06 12:35:55.308163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.896 [2024-11-06 12:35:55.365426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.896 [2024-11-06 12:35:55.406725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.896 [2024-11-06 12:35:55.406758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:23.896 [2024-11-06 12:35:55.406765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.896 [2024-11-06 12:35:55.406771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.896 [2024-11-06 12:35:55.406776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.896 [2024-11-06 12:35:55.407324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.154 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.155 [2024-11-06 12:35:55.608968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.155 [2024-11-06 12:35:55.617152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:24.155 null0 00:29:24.155 [2024-11-06 12:35:55.649132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=314311 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 314311 /tmp/host.sock 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 314311 ']' 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:24.155 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:24.155 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.155 [2024-11-06 12:35:55.724819] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:29:24.155 [2024-11-06 12:35:55.724875] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314311 ] 00:29:24.413 [2024-11-06 12:35:55.819941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.413 [2024-11-06 12:35:55.870356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.413 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.671 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.671 12:35:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:24.671 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.671 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:25.605 [2024-11-06 12:35:57.096972] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:25.605 [2024-11-06 12:35:57.096995] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:25.605 [2024-11-06 12:35:57.097012] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:25.863 [2024-11-06 12:35:57.223465] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:25.863 [2024-11-06 12:35:57.408701] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:25.863 [2024-11-06 12:35:57.409681] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1035300:1 started. 
00:29:25.863 [2024-11-06 12:35:57.411560] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:25.863 [2024-11-06 12:35:57.411612] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:25.863 [2024-11-06 12:35:57.411637] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:25.863 [2024-11-06 12:35:57.411653] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:25.863 [2024-11-06 12:35:57.411677] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:25.863 [2024-11-06 12:35:57.415055] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1035300 was disconnected and freed. delete nvme_qpair. 
00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:25.863 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:26.121 12:35:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:26.121 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:27.053 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.311 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:27.311 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:28.245 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:29.178 12:36:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:29.178 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.179 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:29.179 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:30.550 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.483 [2024-11-06 12:36:02.852886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:31.483 [2024-11-06 12:36:02.852934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.483 [2024-11-06 12:36:02.852950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.483 [2024-11-06 12:36:02.852962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.483 [2024-11-06 12:36:02.852972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.483 [2024-11-06 12:36:02.852983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.483 [2024-11-06 12:36:02.852992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.483 [2024-11-06 12:36:02.853003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.483 [2024-11-06 12:36:02.853013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.483 [2024-11-06 12:36:02.853024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.483 [2024-11-06 12:36:02.853033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.483 [2024-11-06 12:36:02.853043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1011b50 is same with the state(6) to be set 00:29:31.483 [2024-11-06 12:36:02.862906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1011b50 (9): Bad file descriptor 00:29:31.483 [2024-11-06 12:36:02.872949] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:31.483 [2024-11-06 12:36:02.872965] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:31.483 [2024-11-06 12:36:02.872972] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:31.483 [2024-11-06 12:36:02.872979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:31.483 [2024-11-06 12:36:02.873007] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:31.483 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:32.417 [2024-11-06 12:36:03.894511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:32.417 [2024-11-06 12:36:03.894594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1011b50 with addr=10.0.0.2, port=4420 00:29:32.417 [2024-11-06 12:36:03.894627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1011b50 is same with the state(6) to be set 00:29:32.417 [2024-11-06 12:36:03.894685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1011b50 (9): Bad file descriptor 00:29:32.417 [2024-11-06 12:36:03.895652] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:29:32.417 [2024-11-06 12:36:03.895716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:32.417 [2024-11-06 12:36:03.895740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:32.417 [2024-11-06 12:36:03.895763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:32.417 [2024-11-06 12:36:03.895783] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:32.417 [2024-11-06 12:36:03.895799] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:32.417 [2024-11-06 12:36:03.895812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:32.417 [2024-11-06 12:36:03.895834] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:32.417 [2024-11-06 12:36:03.895849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:32.417 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:33.350 [2024-11-06 12:36:04.898370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:33.351 [2024-11-06 12:36:04.898399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:29:33.351 [2024-11-06 12:36:04.898416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:33.351 [2024-11-06 12:36:04.898426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:33.351 [2024-11-06 12:36:04.898436] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:33.351 [2024-11-06 12:36:04.898446] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:33.351 [2024-11-06 12:36:04.898453] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:33.351 [2024-11-06 12:36:04.898463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:33.351 [2024-11-06 12:36:04.898494] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:33.351 [2024-11-06 12:36:04.898523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.351 [2024-11-06 12:36:04.898554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.351 [2024-11-06 12:36:04.898568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.351 [2024-11-06 12:36:04.898578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.351 [2024-11-06 12:36:04.898589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:33.351 [2024-11-06 12:36:04.898599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.351 [2024-11-06 12:36:04.898610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.351 [2024-11-06 12:36:04.898620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.351 [2024-11-06 12:36:04.898631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.351 [2024-11-06 12:36:04.898641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.351 [2024-11-06 12:36:04.898650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:29:33.351 [2024-11-06 12:36:04.899308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1000e50 (9): Bad file descriptor 00:29:33.351 [2024-11-06 12:36:04.900321] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:33.351 [2024-11-06 12:36:04.900335] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:33.351 
12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:33.351 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.609 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:33.609 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.609 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:33.609 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:34.541 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.799 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:34.799 12:36:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:35.363 [2024-11-06 12:36:06.910517] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:35.363 [2024-11-06 12:36:06.910538] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:35.363 [2024-11-06 12:36:06.910556] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.621 [2024-11-06 12:36:07.037011] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:35.621 [2024-11-06 12:36:07.131831] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:35.621 [2024-11-06 12:36:07.132643] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1005c70:1 started. 00:29:35.621 [2024-11-06 12:36:07.134084] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:35.621 [2024-11-06 12:36:07.134126] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:35.621 [2024-11-06 12:36:07.134150] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:35.621 [2024-11-06 12:36:07.134168] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:35.621 [2024-11-06 12:36:07.134178] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:35.621 [2024-11-06 12:36:07.139918] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1005c70 was disconnected and freed. delete nvme_qpair. 
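The trace above repeatedly runs the test's `get_bdev_list`/`wait_for_bdev` helpers: query the host app over its RPC socket, normalize the bdev names into one sorted line, and sleep one second until the expected name appears. A minimal sketch of that polling pattern, with `get_bdev_list` stubbed out (the real helper runs `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs`; the stub, the retry counter, and the shortened sleep are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_bdev polling loop traced in the log above.
# get_bdev_list is stubbed; in the real test it queries the SPDK host
# app over /tmp/host.sock and normalizes the JSON reply with jq/sort/xargs.

i=0
get_bdev_list() {
    # Stub: pretend the bdev shows up on the third poll.
    i=$((i + 1))
    if (( i >= 3 )); then
        echo "nvme1n1"
    else
        echo ""
    fi
}

wait_for_bdev() {
    # Poll until the normalized bdev list equals the expected name.
    local expected=$1 polls=0
    until [[ "$(get_bdev_list)" == "$expected" ]]; do
        polls=$((polls + 1))
        sleep 0.1   # the traced test sleeps 1s per iteration
    done
    echo "found ${expected} after ${polls} retries"
}

wait_for_bdev nvme1n1
```

The same loop drives both directions in the trace: waiting for `nvme0n1` to disappear after the interface is removed, then waiting for `nvme1n1` to appear once discovery reconnects.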
00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 314311 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 314311 ']' 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 314311 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:35.621 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 314311 00:29:35.879 
12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:35.879 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:35.879 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 314311' 00:29:35.879 killing process with pid 314311 00:29:35.879 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 314311 00:29:35.879 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 314311 00:29:35.879 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.880 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.880 rmmod nvme_tcp 00:29:35.880 rmmod nvme_fabrics 00:29:35.880 rmmod nvme_keyring 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 314288 ']' 00:29:36.139 12:36:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 314288 ']' 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 314288' 00:29:36.139 killing process with pid 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 314288 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.139 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.672 00:29:38.672 real 0m20.350s 00:29:38.672 user 0m25.122s 00:29:38.672 sys 0m5.620s 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:38.672 ************************************ 00:29:38.672 END TEST nvmf_discovery_remove_ifc 00:29:38.672 ************************************ 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.672 ************************************ 00:29:38.672 START TEST nvmf_identify_kernel_target 
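The `killprocess` calls traced above (pids 314311 and 314288) follow a consistent teardown pattern: verify the pid is non-empty and still alive with `kill -0`, look up the process name on Linux to refuse killing a `sudo` wrapper, then kill and reap it. A sketch reconstructed from the traced `autotest_common.sh` lines (the exact helper source is not shown in the log, so details such as the return codes are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper whose execution is traced above.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1                # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0     # already gone: nothing to do
    if [[ "$(uname)" == "Linux" ]]; then
        # Never kill a sudo wrapper by mistake; check the command name.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ "$process_name" == "sudo" ]] && return 1
    fi
    echo "killing process with pid ${pid}"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # reap it so the pid is freed
}

sleep 30 &      # stand-in for the SPDK target process
killprocess $!
```

In the trace the looked-up names are `reactor_0` and `reactor_1` (SPDK reactor threads), so the `sudo` guard passes and both processes are killed before `nvmftestfini` unloads the nvme-tcp modules.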
00:29:38.672 ************************************ 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:38.672 * Looking for test storage... 00:29:38.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:29:38.672 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.672 
12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.672 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:38.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.672 --rc genhtml_branch_coverage=1 00:29:38.672 --rc genhtml_function_coverage=1 00:29:38.672 --rc genhtml_legend=1 00:29:38.672 --rc geninfo_all_blocks=1 00:29:38.672 --rc geninfo_unexecuted_blocks=1 00:29:38.672 00:29:38.672 ' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.673 --rc genhtml_branch_coverage=1 00:29:38.673 --rc genhtml_function_coverage=1 00:29:38.673 --rc genhtml_legend=1 00:29:38.673 --rc geninfo_all_blocks=1 00:29:38.673 --rc geninfo_unexecuted_blocks=1 00:29:38.673 00:29:38.673 ' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.673 --rc genhtml_branch_coverage=1 00:29:38.673 --rc genhtml_function_coverage=1 00:29:38.673 --rc genhtml_legend=1 00:29:38.673 --rc geninfo_all_blocks=1 00:29:38.673 --rc geninfo_unexecuted_blocks=1 00:29:38.673 00:29:38.673 ' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:38.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.673 --rc genhtml_branch_coverage=1 00:29:38.673 --rc genhtml_function_coverage=1 00:29:38.673 --rc genhtml_legend=1 00:29:38.673 --rc geninfo_all_blocks=1 00:29:38.673 --rc geninfo_unexecuted_blocks=1 00:29:38.673 
00:29:38.673 ' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.673 12:36:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
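The trace above records a real shell warning from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset/empty variable reaches a numeric test. A minimal reproduction, plus a common defensive pattern (the `${VAR:-0}` default is an illustrative guard, not necessarily how SPDK later fixed it):

```shell
# Reproduce the "[: : integer expression expected" warning seen in the log:
# the numeric -eq operator errors out (status 2) on an empty string.
VAL=""
if [ "$VAL" -eq 1 ] 2>/dev/null; then
  echo "branch taken"
else
  echo "test errored or false"   # this is what happens in the trace
fi

# A common guard is to default the variable to 0 before comparing numerically
# (an assumption for illustration, not the upstream SPDK patch):
if [ "${VAL:-0}" -eq 1 ]; then
  echo "branch taken"
else
  echo "clean false, no warning"
fi
```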
00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.673 12:36:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.946 12:36:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:43.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.946 12:36:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:43.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.946 12:36:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:43.946 Found net devices under 0000:af:00.0: cvl_0_0 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:43.946 Found net devices under 0000:af:00.1: cvl_0_1 
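The "Found net devices under ..." lines come from globbing each PCI function's `net/` directory in sysfs, as `nvmf/common.sh@411-428` does. A standalone sketch of that mapping (the PCI address is taken from this log; on another machine substitute one from `lspci -D`):

```shell
# Map a PCI function to the kernel net devices bound to it, the same way
# the trace does: glob /sys/bus/pci/devices/<pci>/net/*.
pci=0000:af:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$dev" ] || continue   # unmatched glob: no netdev bound to this function
  echo "Found net device under $pci: ${dev##*/}"
done
```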
00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.946 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:43.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:29:43.947 00:29:43.947 --- 10.0.0.2 ping statistics --- 00:29:43.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.947 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:29:43.947 00:29:43.947 --- 10.0.0.1 ping statistics --- 00:29:43.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.947 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.947 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:44.206 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:44.206 
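The `nvmf_tcp_init` sequence traced above (common.sh@250-291) builds a two-interface loop by moving the target-side port into a network namespace, then verifies reachability with ping in both directions. A dry-run sketch of those steps, with interface names and addresses taken from this log; `run` only prints each command, since actually executing them needs root and the ice NIC present on this CI node:

```shell
# Dry-run of the namespace topology built by nvmf_tcp_init in the trace.
# run() prints rather than executes; remove it to apply for real (as root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target interface moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the host stack
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
```

Putting the target in its own namespace is what lets a single machine act as both NVMe-oF initiator and target over real TCP, rather than loopback.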
12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:44.206 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:44.206 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.206 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.206 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:44.207 12:36:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:46.743 Waiting for block devices as requested 00:29:46.743 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:29:46.743 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:46.743 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:46.743 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:46.743 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:46.743 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:47.001 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:47.001 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:47.001 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:47.001 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:47.260 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:47.260 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:47.260 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:47.519 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:47.519 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:29:47.519 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:47.519 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:47.778 No valid GPT data, bailing 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:47.778 00:29:47.778 Discovery Log Number of Records 2, Generation counter 2 00:29:47.778 =====Discovery Log Entry 0====== 00:29:47.778 trtype: tcp 00:29:47.778 adrfam: ipv4 00:29:47.778 subtype: current discovery subsystem 
00:29:47.778 treq: not specified, sq flow control disable supported 00:29:47.778 portid: 1 00:29:47.778 trsvcid: 4420 00:29:47.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:47.778 traddr: 10.0.0.1 00:29:47.778 eflags: none 00:29:47.778 sectype: none 00:29:47.778 =====Discovery Log Entry 1====== 00:29:47.778 trtype: tcp 00:29:47.778 adrfam: ipv4 00:29:47.778 subtype: nvme subsystem 00:29:47.778 treq: not specified, sq flow control disable supported 00:29:47.778 portid: 1 00:29:47.778 trsvcid: 4420 00:29:47.778 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:47.778 traddr: 10.0.0.1 00:29:47.778 eflags: none 00:29:47.778 sectype: none 00:29:47.778 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:47.778 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:48.038 ===================================================== 00:29:48.038 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:48.038 ===================================================== 00:29:48.038 Controller Capabilities/Features 00:29:48.038 ================================ 00:29:48.038 Vendor ID: 0000 00:29:48.038 Subsystem Vendor ID: 0000 00:29:48.038 Serial Number: 70592ca9d3d590f97439 00:29:48.038 Model Number: Linux 00:29:48.038 Firmware Version: 6.8.9-20 00:29:48.038 Recommended Arb Burst: 0 00:29:48.038 IEEE OUI Identifier: 00 00 00 00:29:48.038 Multi-path I/O 00:29:48.038 May have multiple subsystem ports: No 00:29:48.038 May have multiple controllers: No 00:29:48.038 Associated with SR-IOV VF: No 00:29:48.038 Max Data Transfer Size: Unlimited 00:29:48.038 Max Number of Namespaces: 0 00:29:48.038 Max Number of I/O Queues: 1024 00:29:48.038 NVMe Specification Version (VS): 1.3 00:29:48.038 NVMe Specification Version (Identify): 1.3 00:29:48.038 Maximum Queue Entries: 1024 
00:29:48.038 Contiguous Queues Required: No 00:29:48.038 Arbitration Mechanisms Supported 00:29:48.038 Weighted Round Robin: Not Supported 00:29:48.038 Vendor Specific: Not Supported 00:29:48.038 Reset Timeout: 7500 ms 00:29:48.038 Doorbell Stride: 4 bytes 00:29:48.038 NVM Subsystem Reset: Not Supported 00:29:48.038 Command Sets Supported 00:29:48.038 NVM Command Set: Supported 00:29:48.038 Boot Partition: Not Supported 00:29:48.038 Memory Page Size Minimum: 4096 bytes 00:29:48.038 Memory Page Size Maximum: 4096 bytes 00:29:48.038 Persistent Memory Region: Not Supported 00:29:48.038 Optional Asynchronous Events Supported 00:29:48.038 Namespace Attribute Notices: Not Supported 00:29:48.038 Firmware Activation Notices: Not Supported 00:29:48.038 ANA Change Notices: Not Supported 00:29:48.038 PLE Aggregate Log Change Notices: Not Supported 00:29:48.038 LBA Status Info Alert Notices: Not Supported 00:29:48.038 EGE Aggregate Log Change Notices: Not Supported 00:29:48.038 Normal NVM Subsystem Shutdown event: Not Supported 00:29:48.038 Zone Descriptor Change Notices: Not Supported 00:29:48.038 Discovery Log Change Notices: Supported 00:29:48.038 Controller Attributes 00:29:48.038 128-bit Host Identifier: Not Supported 00:29:48.038 Non-Operational Permissive Mode: Not Supported 00:29:48.038 NVM Sets: Not Supported 00:29:48.038 Read Recovery Levels: Not Supported 00:29:48.038 Endurance Groups: Not Supported 00:29:48.038 Predictable Latency Mode: Not Supported 00:29:48.038 Traffic Based Keep ALive: Not Supported 00:29:48.038 Namespace Granularity: Not Supported 00:29:48.038 SQ Associations: Not Supported 00:29:48.038 UUID List: Not Supported 00:29:48.038 Multi-Domain Subsystem: Not Supported 00:29:48.038 Fixed Capacity Management: Not Supported 00:29:48.039 Variable Capacity Management: Not Supported 00:29:48.039 Delete Endurance Group: Not Supported 00:29:48.039 Delete NVM Set: Not Supported 00:29:48.039 Extended LBA Formats Supported: Not Supported 00:29:48.039 Flexible 
Data Placement Supported: Not Supported 00:29:48.039 00:29:48.039 Controller Memory Buffer Support 00:29:48.039 ================================ 00:29:48.039 Supported: No 00:29:48.039 00:29:48.039 Persistent Memory Region Support 00:29:48.039 ================================ 00:29:48.039 Supported: No 00:29:48.039 00:29:48.039 Admin Command Set Attributes 00:29:48.039 ============================ 00:29:48.039 Security Send/Receive: Not Supported 00:29:48.039 Format NVM: Not Supported 00:29:48.039 Firmware Activate/Download: Not Supported 00:29:48.039 Namespace Management: Not Supported 00:29:48.039 Device Self-Test: Not Supported 00:29:48.039 Directives: Not Supported 00:29:48.039 NVMe-MI: Not Supported 00:29:48.039 Virtualization Management: Not Supported 00:29:48.039 Doorbell Buffer Config: Not Supported 00:29:48.039 Get LBA Status Capability: Not Supported 00:29:48.039 Command & Feature Lockdown Capability: Not Supported 00:29:48.039 Abort Command Limit: 1 00:29:48.039 Async Event Request Limit: 1 00:29:48.039 Number of Firmware Slots: N/A 00:29:48.039 Firmware Slot 1 Read-Only: N/A 00:29:48.039 Firmware Activation Without Reset: N/A 00:29:48.039 Multiple Update Detection Support: N/A 00:29:48.039 Firmware Update Granularity: No Information Provided 00:29:48.039 Per-Namespace SMART Log: No 00:29:48.039 Asymmetric Namespace Access Log Page: Not Supported 00:29:48.039 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:48.039 Command Effects Log Page: Not Supported 00:29:48.039 Get Log Page Extended Data: Supported 00:29:48.039 Telemetry Log Pages: Not Supported 00:29:48.039 Persistent Event Log Pages: Not Supported 00:29:48.039 Supported Log Pages Log Page: May Support 00:29:48.039 Commands Supported & Effects Log Page: Not Supported 00:29:48.039 Feature Identifiers & Effects Log Page:May Support 00:29:48.039 NVMe-MI Commands & Effects Log Page: May Support 00:29:48.039 Data Area 4 for Telemetry Log: Not Supported 00:29:48.039 Error Log Page Entries 
Supported: 1 00:29:48.039 Keep Alive: Not Supported 00:29:48.039 00:29:48.039 NVM Command Set Attributes 00:29:48.039 ========================== 00:29:48.039 Submission Queue Entry Size 00:29:48.039 Max: 1 00:29:48.039 Min: 1 00:29:48.039 Completion Queue Entry Size 00:29:48.039 Max: 1 00:29:48.039 Min: 1 00:29:48.039 Number of Namespaces: 0 00:29:48.039 Compare Command: Not Supported 00:29:48.039 Write Uncorrectable Command: Not Supported 00:29:48.039 Dataset Management Command: Not Supported 00:29:48.039 Write Zeroes Command: Not Supported 00:29:48.039 Set Features Save Field: Not Supported 00:29:48.039 Reservations: Not Supported 00:29:48.039 Timestamp: Not Supported 00:29:48.039 Copy: Not Supported 00:29:48.039 Volatile Write Cache: Not Present 00:29:48.039 Atomic Write Unit (Normal): 1 00:29:48.039 Atomic Write Unit (PFail): 1 00:29:48.039 Atomic Compare & Write Unit: 1 00:29:48.039 Fused Compare & Write: Not Supported 00:29:48.039 Scatter-Gather List 00:29:48.039 SGL Command Set: Supported 00:29:48.039 SGL Keyed: Not Supported 00:29:48.039 SGL Bit Bucket Descriptor: Not Supported 00:29:48.039 SGL Metadata Pointer: Not Supported 00:29:48.039 Oversized SGL: Not Supported 00:29:48.039 SGL Metadata Address: Not Supported 00:29:48.039 SGL Offset: Supported 00:29:48.039 Transport SGL Data Block: Not Supported 00:29:48.039 Replay Protected Memory Block: Not Supported 00:29:48.039 00:29:48.039 Firmware Slot Information 00:29:48.039 ========================= 00:29:48.039 Active slot: 0 00:29:48.039 00:29:48.039 00:29:48.039 Error Log 00:29:48.039 ========= 00:29:48.039 00:29:48.039 Active Namespaces 00:29:48.039 ================= 00:29:48.039 Discovery Log Page 00:29:48.039 ================== 00:29:48.039 Generation Counter: 2 00:29:48.039 Number of Records: 2 00:29:48.039 Record Format: 0 00:29:48.039 00:29:48.039 Discovery Log Entry 0 00:29:48.039 ---------------------- 00:29:48.039 Transport Type: 3 (TCP) 00:29:48.039 Address Family: 1 (IPv4) 00:29:48.039 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:29:48.039 Entry Flags: 00:29:48.039 Duplicate Returned Information: 0 00:29:48.039 Explicit Persistent Connection Support for Discovery: 0 00:29:48.039 Transport Requirements: 00:29:48.039 Secure Channel: Not Specified 00:29:48.039 Port ID: 1 (0x0001) 00:29:48.039 Controller ID: 65535 (0xffff) 00:29:48.039 Admin Max SQ Size: 32 00:29:48.039 Transport Service Identifier: 4420 00:29:48.039 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:48.039 Transport Address: 10.0.0.1 00:29:48.039 Discovery Log Entry 1 00:29:48.039 ---------------------- 00:29:48.039 Transport Type: 3 (TCP) 00:29:48.039 Address Family: 1 (IPv4) 00:29:48.039 Subsystem Type: 2 (NVM Subsystem) 00:29:48.039 Entry Flags: 00:29:48.039 Duplicate Returned Information: 0 00:29:48.039 Explicit Persistent Connection Support for Discovery: 0 00:29:48.040 Transport Requirements: 00:29:48.040 Secure Channel: Not Specified 00:29:48.040 Port ID: 1 (0x0001) 00:29:48.040 Controller ID: 65535 (0xffff) 00:29:48.040 Admin Max SQ Size: 32 00:29:48.040 Transport Service Identifier: 4420 00:29:48.040 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:48.040 Transport Address: 10.0.0.1 00:29:48.040 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:48.040 get_feature(0x01) failed 00:29:48.040 get_feature(0x02) failed 00:29:48.040 get_feature(0x04) failed 00:29:48.040 ===================================================== 00:29:48.040 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:48.040 ===================================================== 00:29:48.040 Controller Capabilities/Features 00:29:48.040 ================================ 00:29:48.040 Vendor ID: 0000 00:29:48.040 Subsystem Vendor ID: 
0000 00:29:48.040 Serial Number: 4a102c55821be863c29b 00:29:48.040 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:48.040 Firmware Version: 6.8.9-20 00:29:48.040 Recommended Arb Burst: 6 00:29:48.040 IEEE OUI Identifier: 00 00 00 00:29:48.040 Multi-path I/O 00:29:48.040 May have multiple subsystem ports: Yes 00:29:48.040 May have multiple controllers: Yes 00:29:48.040 Associated with SR-IOV VF: No 00:29:48.040 Max Data Transfer Size: Unlimited 00:29:48.040 Max Number of Namespaces: 1024 00:29:48.040 Max Number of I/O Queues: 128 00:29:48.040 NVMe Specification Version (VS): 1.3 00:29:48.040 NVMe Specification Version (Identify): 1.3 00:29:48.040 Maximum Queue Entries: 1024 00:29:48.040 Contiguous Queues Required: No 00:29:48.040 Arbitration Mechanisms Supported 00:29:48.040 Weighted Round Robin: Not Supported 00:29:48.040 Vendor Specific: Not Supported 00:29:48.040 Reset Timeout: 7500 ms 00:29:48.040 Doorbell Stride: 4 bytes 00:29:48.040 NVM Subsystem Reset: Not Supported 00:29:48.040 Command Sets Supported 00:29:48.040 NVM Command Set: Supported 00:29:48.040 Boot Partition: Not Supported 00:29:48.040 Memory Page Size Minimum: 4096 bytes 00:29:48.040 Memory Page Size Maximum: 4096 bytes 00:29:48.040 Persistent Memory Region: Not Supported 00:29:48.040 Optional Asynchronous Events Supported 00:29:48.040 Namespace Attribute Notices: Supported 00:29:48.040 Firmware Activation Notices: Not Supported 00:29:48.040 ANA Change Notices: Supported 00:29:48.040 PLE Aggregate Log Change Notices: Not Supported 00:29:48.040 LBA Status Info Alert Notices: Not Supported 00:29:48.040 EGE Aggregate Log Change Notices: Not Supported 00:29:48.040 Normal NVM Subsystem Shutdown event: Not Supported 00:29:48.040 Zone Descriptor Change Notices: Not Supported 00:29:48.040 Discovery Log Change Notices: Not Supported 00:29:48.040 Controller Attributes 00:29:48.040 128-bit Host Identifier: Supported 00:29:48.040 Non-Operational Permissive Mode: Not Supported 00:29:48.040 NVM Sets: Not 
Supported 00:29:48.040 Read Recovery Levels: Not Supported 00:29:48.040 Endurance Groups: Not Supported 00:29:48.040 Predictable Latency Mode: Not Supported 00:29:48.040 Traffic Based Keep ALive: Supported 00:29:48.040 Namespace Granularity: Not Supported 00:29:48.040 SQ Associations: Not Supported 00:29:48.040 UUID List: Not Supported 00:29:48.040 Multi-Domain Subsystem: Not Supported 00:29:48.040 Fixed Capacity Management: Not Supported 00:29:48.040 Variable Capacity Management: Not Supported 00:29:48.040 Delete Endurance Group: Not Supported 00:29:48.040 Delete NVM Set: Not Supported 00:29:48.040 Extended LBA Formats Supported: Not Supported 00:29:48.040 Flexible Data Placement Supported: Not Supported 00:29:48.040 00:29:48.040 Controller Memory Buffer Support 00:29:48.040 ================================ 00:29:48.040 Supported: No 00:29:48.040 00:29:48.040 Persistent Memory Region Support 00:29:48.040 ================================ 00:29:48.040 Supported: No 00:29:48.040 00:29:48.040 Admin Command Set Attributes 00:29:48.040 ============================ 00:29:48.040 Security Send/Receive: Not Supported 00:29:48.040 Format NVM: Not Supported 00:29:48.040 Firmware Activate/Download: Not Supported 00:29:48.040 Namespace Management: Not Supported 00:29:48.040 Device Self-Test: Not Supported 00:29:48.040 Directives: Not Supported 00:29:48.040 NVMe-MI: Not Supported 00:29:48.040 Virtualization Management: Not Supported 00:29:48.040 Doorbell Buffer Config: Not Supported 00:29:48.040 Get LBA Status Capability: Not Supported 00:29:48.040 Command & Feature Lockdown Capability: Not Supported 00:29:48.040 Abort Command Limit: 4 00:29:48.040 Async Event Request Limit: 4 00:29:48.040 Number of Firmware Slots: N/A 00:29:48.040 Firmware Slot 1 Read-Only: N/A 00:29:48.040 Firmware Activation Without Reset: N/A 00:29:48.040 Multiple Update Detection Support: N/A 00:29:48.040 Firmware Update Granularity: No Information Provided 00:29:48.040 Per-Namespace SMART Log: Yes 
00:29:48.040 Asymmetric Namespace Access Log Page: Supported 00:29:48.040 ANA Transition Time : 10 sec 00:29:48.040 00:29:48.040 Asymmetric Namespace Access Capabilities 00:29:48.040 ANA Optimized State : Supported 00:29:48.040 ANA Non-Optimized State : Supported 00:29:48.040 ANA Inaccessible State : Supported 00:29:48.040 ANA Persistent Loss State : Supported 00:29:48.040 ANA Change State : Supported 00:29:48.040 ANAGRPID is not changed : No 00:29:48.040 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:48.040 00:29:48.040 ANA Group Identifier Maximum : 128 00:29:48.040 Number of ANA Group Identifiers : 128 00:29:48.040 Max Number of Allowed Namespaces : 1024 00:29:48.040 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:48.040 Command Effects Log Page: Supported 00:29:48.040 Get Log Page Extended Data: Supported 00:29:48.040 Telemetry Log Pages: Not Supported 00:29:48.040 Persistent Event Log Pages: Not Supported 00:29:48.040 Supported Log Pages Log Page: May Support 00:29:48.040 Commands Supported & Effects Log Page: Not Supported 00:29:48.040 Feature Identifiers & Effects Log Page:May Support 00:29:48.040 NVMe-MI Commands & Effects Log Page: May Support 00:29:48.040 Data Area 4 for Telemetry Log: Not Supported 00:29:48.040 Error Log Page Entries Supported: 128 00:29:48.040 Keep Alive: Supported 00:29:48.040 Keep Alive Granularity: 1000 ms 00:29:48.040 00:29:48.040 NVM Command Set Attributes 00:29:48.040 ========================== 00:29:48.040 Submission Queue Entry Size 00:29:48.041 Max: 64 00:29:48.041 Min: 64 00:29:48.041 Completion Queue Entry Size 00:29:48.041 Max: 16 00:29:48.041 Min: 16 00:29:48.041 Number of Namespaces: 1024 00:29:48.041 Compare Command: Not Supported 00:29:48.041 Write Uncorrectable Command: Not Supported 00:29:48.041 Dataset Management Command: Supported 00:29:48.041 Write Zeroes Command: Supported 00:29:48.041 Set Features Save Field: Not Supported 00:29:48.041 Reservations: Not Supported 00:29:48.041 Timestamp: Not Supported 
00:29:48.041 Copy: Not Supported 00:29:48.041 Volatile Write Cache: Present 00:29:48.041 Atomic Write Unit (Normal): 1 00:29:48.041 Atomic Write Unit (PFail): 1 00:29:48.041 Atomic Compare & Write Unit: 1 00:29:48.041 Fused Compare & Write: Not Supported 00:29:48.041 Scatter-Gather List 00:29:48.041 SGL Command Set: Supported 00:29:48.041 SGL Keyed: Not Supported 00:29:48.041 SGL Bit Bucket Descriptor: Not Supported 00:29:48.041 SGL Metadata Pointer: Not Supported 00:29:48.041 Oversized SGL: Not Supported 00:29:48.041 SGL Metadata Address: Not Supported 00:29:48.041 SGL Offset: Supported 00:29:48.041 Transport SGL Data Block: Not Supported 00:29:48.041 Replay Protected Memory Block: Not Supported 00:29:48.041 00:29:48.041 Firmware Slot Information 00:29:48.041 ========================= 00:29:48.041 Active slot: 0 00:29:48.041 00:29:48.041 Asymmetric Namespace Access 00:29:48.041 =========================== 00:29:48.041 Change Count : 0 00:29:48.041 Number of ANA Group Descriptors : 1 00:29:48.041 ANA Group Descriptor : 0 00:29:48.041 ANA Group ID : 1 00:29:48.041 Number of NSID Values : 1 00:29:48.041 Change Count : 0 00:29:48.041 ANA State : 1 00:29:48.041 Namespace Identifier : 1 00:29:48.041 00:29:48.041 Commands Supported and Effects 00:29:48.041 ============================== 00:29:48.041 Admin Commands 00:29:48.041 -------------- 00:29:48.041 Get Log Page (02h): Supported 00:29:48.041 Identify (06h): Supported 00:29:48.041 Abort (08h): Supported 00:29:48.041 Set Features (09h): Supported 00:29:48.041 Get Features (0Ah): Supported 00:29:48.041 Asynchronous Event Request (0Ch): Supported 00:29:48.041 Keep Alive (18h): Supported 00:29:48.041 I/O Commands 00:29:48.041 ------------ 00:29:48.041 Flush (00h): Supported 00:29:48.041 Write (01h): Supported LBA-Change 00:29:48.041 Read (02h): Supported 00:29:48.041 Write Zeroes (08h): Supported LBA-Change 00:29:48.041 Dataset Management (09h): Supported 00:29:48.041 00:29:48.041 Error Log 00:29:48.041 ========= 
00:29:48.041 Entry: 0 00:29:48.041 Error Count: 0x3 00:29:48.041 Submission Queue Id: 0x0 00:29:48.041 Command Id: 0x5 00:29:48.041 Phase Bit: 0 00:29:48.041 Status Code: 0x2 00:29:48.041 Status Code Type: 0x0 00:29:48.041 Do Not Retry: 1 00:29:48.041 Error Location: 0x28 00:29:48.041 LBA: 0x0 00:29:48.041 Namespace: 0x0 00:29:48.041 Vendor Log Page: 0x0 00:29:48.041 ----------- 00:29:48.041 Entry: 1 00:29:48.041 Error Count: 0x2 00:29:48.041 Submission Queue Id: 0x0 00:29:48.041 Command Id: 0x5 00:29:48.041 Phase Bit: 0 00:29:48.041 Status Code: 0x2 00:29:48.041 Status Code Type: 0x0 00:29:48.041 Do Not Retry: 1 00:29:48.041 Error Location: 0x28 00:29:48.041 LBA: 0x0 00:29:48.041 Namespace: 0x0 00:29:48.041 Vendor Log Page: 0x0 00:29:48.041 ----------- 00:29:48.041 Entry: 2 00:29:48.041 Error Count: 0x1 00:29:48.041 Submission Queue Id: 0x0 00:29:48.041 Command Id: 0x4 00:29:48.041 Phase Bit: 0 00:29:48.041 Status Code: 0x2 00:29:48.041 Status Code Type: 0x0 00:29:48.041 Do Not Retry: 1 00:29:48.041 Error Location: 0x28 00:29:48.041 LBA: 0x0 00:29:48.041 Namespace: 0x0 00:29:48.041 Vendor Log Page: 0x0 00:29:48.041 00:29:48.041 Number of Queues 00:29:48.041 ================ 00:29:48.041 Number of I/O Submission Queues: 128 00:29:48.041 Number of I/O Completion Queues: 128 00:29:48.041 00:29:48.041 ZNS Specific Controller Data 00:29:48.041 ============================ 00:29:48.041 Zone Append Size Limit: 0 00:29:48.041 00:29:48.041 00:29:48.041 Active Namespaces 00:29:48.041 ================= 00:29:48.041 get_feature(0x05) failed 00:29:48.041 Namespace ID:1 00:29:48.041 Command Set Identifier: NVM (00h) 00:29:48.041 Deallocate: Supported 00:29:48.041 Deallocated/Unwritten Error: Not Supported 00:29:48.041 Deallocated Read Value: Unknown 00:29:48.041 Deallocate in Write Zeroes: Not Supported 00:29:48.041 Deallocated Guard Field: 0xFFFF 00:29:48.041 Flush: Supported 00:29:48.041 Reservation: Not Supported 00:29:48.041 Namespace Sharing Capabilities: Multiple 
Controllers 00:29:48.041 Size (in LBAs): 1953525168 (931GiB) 00:29:48.041 Capacity (in LBAs): 1953525168 (931GiB) 00:29:48.041 Utilization (in LBAs): 1953525168 (931GiB) 00:29:48.041 UUID: 8beb90c7-c011-4716-b1b3-72a75653de7f 00:29:48.041 Thin Provisioning: Not Supported 00:29:48.041 Per-NS Atomic Units: Yes 00:29:48.041 Atomic Boundary Size (Normal): 0 00:29:48.041 Atomic Boundary Size (PFail): 0 00:29:48.041 Atomic Boundary Offset: 0 00:29:48.041 NGUID/EUI64 Never Reused: No 00:29:48.041 ANA group ID: 1 00:29:48.041 Namespace Write Protected: No 00:29:48.041 Number of LBA Formats: 1 00:29:48.041 Current LBA Format: LBA Format #00 00:29:48.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:48.041 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.041 rmmod nvme_tcp 00:29:48.041 rmmod nvme_fabrics 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.041 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.300 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.300 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.300 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.300 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.300 12:36:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:50.206 12:36:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:50.206 12:36:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:52.739 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:52.740 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:52.740 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:52.740 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:52.740 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:52.999 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:29:53.936 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:29:53.936 00:29:53.936 real 0m15.557s 00:29:53.936 user 0m3.786s 00:29:53.936 sys 0m7.853s 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.936 ************************************ 00:29:53.936 END TEST nvmf_identify_kernel_target 00:29:53.936 ************************************ 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.936 ************************************ 00:29:53.936 START TEST nvmf_auth_host 00:29:53.936 ************************************ 00:29:53.936 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:54.196 * Looking for test storage... 
00:29:54.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:54.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.196 --rc genhtml_branch_coverage=1 00:29:54.196 --rc genhtml_function_coverage=1 00:29:54.196 --rc genhtml_legend=1 00:29:54.196 --rc geninfo_all_blocks=1 00:29:54.196 --rc geninfo_unexecuted_blocks=1 00:29:54.196 00:29:54.196 ' 00:29:54.196 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:54.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.196 --rc genhtml_branch_coverage=1 00:29:54.196 --rc genhtml_function_coverage=1 00:29:54.196 --rc genhtml_legend=1 00:29:54.196 --rc geninfo_all_blocks=1 00:29:54.196 --rc geninfo_unexecuted_blocks=1 00:29:54.196 00:29:54.196 ' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:54.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.196 --rc genhtml_branch_coverage=1 00:29:54.196 --rc genhtml_function_coverage=1 00:29:54.196 --rc genhtml_legend=1 00:29:54.196 --rc geninfo_all_blocks=1 00:29:54.196 --rc geninfo_unexecuted_blocks=1 00:29:54.196 00:29:54.196 ' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:54.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.196 --rc genhtml_branch_coverage=1 00:29:54.196 --rc genhtml_function_coverage=1 00:29:54.196 --rc genhtml_legend=1 00:29:54.196 --rc geninfo_all_blocks=1 00:29:54.196 --rc geninfo_unexecuted_blocks=1 00:29:54.196 00:29:54.196 ' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.196 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.197 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:54.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.197 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.197 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.466 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.466 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.466 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.466 12:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.466 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.467 12:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:29:59.467 00:29:59.467 --- 10.0.0.2 ping statistics --- 00:29:59.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.467 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:29:59.467 00:29:59.467 --- 10.0.0.1 ping statistics --- 00:29:59.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.467 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=327122 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 327122 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 327122 ']' 00:29:59.467 12:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.467 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da0235761703bd4044bec86aa04f3b2b 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.u6n 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da0235761703bd4044bec86aa04f3b2b 0 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da0235761703bd4044bec86aa04f3b2b 0 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da0235761703bd4044bec86aa04f3b2b 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:59.727 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.u6n 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.u6n 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.u6n 00:29:59.986 12:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e6d93749c2e09451e016a6bc400f5a1524e4dd926c2950d7a0bd45452fe4f50a 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1ar 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e6d93749c2e09451e016a6bc400f5a1524e4dd926c2950d7a0bd45452fe4f50a 3 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e6d93749c2e09451e016a6bc400f5a1524e4dd926c2950d7a0bd45452fe4f50a 3 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e6d93749c2e09451e016a6bc400f5a1524e4dd926c2950d7a0bd45452fe4f50a 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1ar 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1ar 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1ar 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:59.986 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=935fb776294656976ef31a0a5eb1a2303bb374e9aac703d1 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Iva 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 935fb776294656976ef31a0a5eb1a2303bb374e9aac703d1 0 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 935fb776294656976ef31a0a5eb1a2303bb374e9aac703d1 0 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:59.987 12:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=935fb776294656976ef31a0a5eb1a2303bb374e9aac703d1 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Iva 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Iva 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Iva 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56cd10a40159708f7fe3058bd7bc77ab64f7248a32fa65bf 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5yX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56cd10a40159708f7fe3058bd7bc77ab64f7248a32fa65bf 2 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 56cd10a40159708f7fe3058bd7bc77ab64f7248a32fa65bf 2 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56cd10a40159708f7fe3058bd7bc77ab64f7248a32fa65bf 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5yX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5yX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.5yX 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:59.987 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc98012a62dd1caec1b6994424f56983 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IcK 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc98012a62dd1caec1b6994424f56983 1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc98012a62dd1caec1b6994424f56983 1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc98012a62dd1caec1b6994424f56983 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IcK 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IcK 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.IcK 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=2501b05a8d9506591c59aef55aaf06c5 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nxe 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2501b05a8d9506591c59aef55aaf06c5 1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2501b05a8d9506591c59aef55aaf06c5 1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2501b05a8d9506591c59aef55aaf06c5 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nxe 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nxe 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Nxe 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:00.247 12:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8622b2bbdf53c64af3a4633f224172ff23397ec8c004e376 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YsM 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8622b2bbdf53c64af3a4633f224172ff23397ec8c004e376 2 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8622b2bbdf53c64af3a4633f224172ff23397ec8c004e376 2 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8622b2bbdf53c64af3a4633f224172ff23397ec8c004e376 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YsM 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YsM 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YsM 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ef23e1d69020c605bc0be9c1e5a255c 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j6J 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ef23e1d69020c605bc0be9c1e5a255c 0 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ef23e1d69020c605bc0be9c1e5a255c 0 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ef23e1d69020c605bc0be9c1e5a255c 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:00.247 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j6J 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j6J 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.j6J 00:30:00.507 12:36:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0372c935a0759a780c3e6660281fdd158f45af7d1dfd1963e5618af3ab227164 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5PD 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0372c935a0759a780c3e6660281fdd158f45af7d1dfd1963e5618af3ab227164 3 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0372c935a0759a780c3e6660281fdd158f45af7d1dfd1963e5618af3ab227164 3 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0372c935a0759a780c3e6660281fdd158f45af7d1dfd1963e5618af3ab227164 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5PD 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5PD 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5PD 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 327122 00:30:00.507 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 327122 ']' 00:30:00.508 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.508 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:00.508 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
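The key-generation trace above (`gen_dhchap_key` → `xxd` from `/dev/urandom` → `format_dhchap_key` → an inline `python -`) can be sketched roughly as follows. This is a hedged reconstruction, not SPDK's actual helper: it assumes the `DHHC-1:0<digest>:<base64>:` secret form used by nvme-cli, where the base64 payload is the raw key bytes followed by a little-endian CRC32 of the key. `od` stands in for the log's `xxd` for portability.

```shell
# Sketch (assumed behavior, mirroring the log's gen_dhchap_key/format_dhchap_key):
# draw random hex from /dev/urandom, then wrap it in the DHHC-1 secret form.
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')  # 16 bytes -> 32 hex chars
digest=1                                                     # 0=null 1=sha256 2=sha384 3=sha512
secret=$(python3 - "$key" "$digest" <<'PY'
import sys, base64, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the key, appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
)
echo "$secret"
```

The resulting string matches the shape of the secrets visible later in the log (e.g. `DHHC-1:00:ZGEw...vzVr:`), which are then registered via `rpc_cmd keyring_file_add_key`.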
00:30:00.508 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:00.508 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u6n 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1ar ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1ar 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Iva 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.5yX ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5yX 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IcK 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Nxe ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nxe 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.YsM 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.j6J ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.j6J 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5PD 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.767 12:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:00.767 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:00.768 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:03.323 Waiting for block devices as requested 00:30:03.323 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:30:03.597 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:03.597 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:03.597 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:03.597 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:03.873 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:03.873 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:03.873 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:04.181 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:04.181 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:04.181 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:04.181 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:04.181 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:04.460 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:04.460 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:04.460 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:04.460 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:05.071 No valid GPT data, bailing 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:05.071 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:05.339 00:30:05.339 Discovery Log Number of Records 2, Generation counter 2 00:30:05.339 =====Discovery Log Entry 0====== 00:30:05.339 trtype: tcp 00:30:05.339 adrfam: ipv4 00:30:05.339 subtype: current discovery subsystem 00:30:05.339 treq: not specified, sq flow control disable supported 00:30:05.339 portid: 1 00:30:05.339 trsvcid: 4420 00:30:05.339 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:05.339 traddr: 10.0.0.1 00:30:05.339 eflags: none 00:30:05.339 sectype: none 00:30:05.339 =====Discovery Log Entry 1====== 00:30:05.339 trtype: tcp 00:30:05.339 adrfam: ipv4 00:30:05.339 subtype: nvme subsystem 00:30:05.339 treq: not specified, sq flow control disable supported 00:30:05.339 portid: 1 00:30:05.339 trsvcid: 4420 00:30:05.339 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:05.339 traddr: 10.0.0.1 00:30:05.339 eflags: none 00:30:05.339 sectype: none 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
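The `configure_kernel_target` portion of the trace (the `mkdir`/`echo`/`ln -s` calls under `/sys/kernel/config/nvmet`) amounts to the following configfs sequence. This is a dry-run sketch: it only prints the planned writes (NQNs, address, and device taken from the log; the attribute names are the standard kernel nvmet configfs ones). On a real host these writes require root with the `nvmet` and `nvmet-tcp` modules loaded.

```shell
# Dry-run sketch of the kernel NVMe/TCP target setup seen in the log.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

cat <<EOF
mkdir -p $subsys/namespaces/1 $port
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $port/addr_traddr
echo tcp          > $port/addr_trtype
echo 4420         > $port/addr_trsvcid
echo ipv4         > $port/addr_adrfam
ln -s $subsys $port/subsystems/
# restrict access to the authenticated host rather than allow_any_host:
mkdir $nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > $subsys/attr_allow_any_host
ln -s $nvmet/hosts/nqn.2024-02.io.spdk:host0 $subsys/allowed_hosts/
EOF
```

After these steps the `nvme discover -a 10.0.0.1 -t tcp -s 4420` call in the log reports two records: the discovery subsystem itself and `nqn.2024-02.io.spdk:cnode0`.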
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.339 nvme0n1 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.339 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.597 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.597 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.597 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.598 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.598 nvme0n1 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.598 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.856 12:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.856 
12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.856 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.857 nvme0n1 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.857 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:30:06.115 nvme0n1 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.116 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.374 nvme0n1 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:06.374 12:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.374 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.375 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.375 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.375 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:06.375 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.375 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.633 nvme0n1 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.633 
12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:06.633 
12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.633 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.633 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.891 nvme0n1 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.891 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:06.891 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.892 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.892 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.150 nvme0n1 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.150 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.150 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.408 nvme0n1 00:30:07.408 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:07.408 12:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.408 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.409 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.667 nvme0n1 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.667 12:36:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.667 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.925 nvme0n1 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.926 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.186 nvme0n1 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.186 
12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.186 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.445 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.702 nvme0n1 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.702 12:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:08.702 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.703 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.961 nvme0n1 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.961 12:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:08.961 
12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.961 12:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.961 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.219 nvme0n1 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.219 12:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.219 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.476 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.477 
12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.477 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.735 nvme0n1 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.735 12:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.735 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.301 nvme0n1 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.301 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.302 12:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.302 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.868 nvme0n1 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.868 12:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.868 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.434 nvme0n1 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.434 12:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:11.434 12:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.434 12:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.434 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:11.435 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.435 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.001 nvme0n1 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.001 12:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.001 12:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.001 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.259 nvme0n1 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.259 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:12.517 12:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.517 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.082 nvme0n1 00:30:13.082 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.082 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.082 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.082 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.082 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:13.339 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.340 12:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.340 12:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:13.340 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.340 12:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.291 nvme0n1 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:14.291 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.292 12:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.292 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.855 nvme0n1 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.855 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.856 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.856 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.113 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.678 nvme0n1 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.678 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.936 
12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.936 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.501 nvme0n1 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.501 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.759 nvme0n1 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.759 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.016 
12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 nvme0n1 
00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.016 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:17.017 12:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.017 
12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.017 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.274 nvme0n1 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.274 12:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.274 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:17.275 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.275 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.532 nvme0n1 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.532 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.532 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.789 nvme0n1 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.790 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.048 nvme0n1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:18.048 
12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.048 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.305 nvme0n1 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 
00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.305 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.305 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.563 nvme0n1 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.563 12:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.563 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 nvme0n1 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:18.821 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.822 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.080 nvme0n1 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.080 12:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.080 12:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:19.080 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.080 12:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.337 nvme0n1 00:30:19.337 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.337 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.337 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.337 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.338 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.595 
12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.595 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.853 nvme0n1 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.853 12:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.853 12:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.853 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 nvme0n1 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.111 12:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.111 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.368 nvme0n1 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.368 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.626 12:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:20.626 12:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:20.626 
12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.626 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.885 nvme0n1 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:20.885 12:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.885 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.450 nvme0n1 
00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:21.450 12:36:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.450 
12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.450 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.014 nvme0n1 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.014 12:36:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.014 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:22.015 12:36:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.015 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.579 nvme0n1 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.579 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:22.579 12:36:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.579 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.580 12:36:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.580 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.144 nvme0n1 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.144 12:36:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:23.144 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:23.145 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.710 nvme0n1 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.710 12:36:55 
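The records above repeat one pattern per digest/dhgroup/keyid combination: key the kernel target (`nvmet_auth_set_key`), restrict the SPDK initiator to that one algorithm pair (`bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...`), attach with the matching `--dhchap-key`/`--dhchap-ctrlr-key`, verify the `nvme0` controller appears, then detach. A dry-run sketch of that nested loop follows; the digest/dhgroup names and RPC spellings are taken from this log, but the `echo`-into-a-plan wrapper is purely illustrative, not the harness's own code:

```shell
# Dry-run of the loop structure driving this trace. Values mirror the log;
# a real run calls rpc.py and nvmet_auth_set_key instead of building a plan.
digests="sha256 sha384 sha512"
dhgroups="ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192"
keyids="0 1 2 3 4"

plan=""
for digest in $digests; do
  for dhgroup in $dhgroups; do
    for keyid in $keyids; do
      plan="$plan
set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup
attach_controller -a 10.0.0.1 -s 4420 --dhchap-key key$keyid
detach_controller nvme0"
    done
  done
done
# 3 digests x 5 dhgroups x 5 keyids = 75 connect attempts in total
printf '%s\n' "$plan" | grep -c '^attach_controller'
```

Counting only lines that begin with `attach_controller` (a plain `grep -c attach_controller` would also match the `detach_controller` lines) confirms the 75 authenticated connect attempts this phase performs.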
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.710 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.642 nvme0n1 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:24.642 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.643 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.207 nvme0n1 00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
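The DHHC-1 secrets echoed throughout (e.g. `DHHC-1:00:MGVm...`) use the colon-separated layout produced by `nvme gen-dhchap-key`: a `DHHC-1` prefix, a two-digit HMAC identifier, and a base64 payload carrying the secret plus a trailing 4-byte CRC32. A small sketch that splits one key from this log into those fields; the byte-count interpretation (32-byte secret + CRC32) is an inference from the decoded length, not stated in the log:

```shell
# Split a DH-HMAC-CHAP secret from this log into its fields.
# Layout: DHHC-1:<hmac-id>:<base64(secret || crc32)>:
key='DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD:'
prefix=$(printf '%s' "$key" | cut -d: -f1)   # DHHC-1
hmac_id=$(printf '%s' "$key" | cut -d: -f2)  # 00 = no key transformation
b64=$(printf '%s' "$key" | cut -d: -f3)      # base64 never contains ':'
nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)
# 48 base64 chars decode to 36 bytes: a 32-byte secret plus 4-byte CRC32
echo "$prefix hmac=$hmac_id payload=${nbytes} bytes"
```

The same split works for the longer keyid-3/4 secrets above; their `:02:`/`:03:` identifiers and 48/64-byte secrets correspond to the SHA-384/SHA-512 key lengths the test pairs them with.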
00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.207 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:25.464 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.465 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.029 nvme0n1 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.029 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.287 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.852 nvme0n1 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.852 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.109 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:28.041 nvme0n1 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.041 12:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.041 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 nvme0n1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.042 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.299 nvme0n1 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:28.299 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.300 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.557 nvme0n1 00:30:28.557 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.557 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.557 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.557 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.558 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.558 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.815 nvme0n1 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.815 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.816 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.073 nvme0n1 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:29.073 12:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.073 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.074 12:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.074 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.331 nvme0n1 00:30:29.331 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.331 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.331 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.331 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.331 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:29.332 12:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.332 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.589 nvme0n1 00:30:29.589 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.589 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.589 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.589 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.589 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.589 
12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:29.589 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.590 12:37:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.590 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 nvme0n1 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.847 12:37:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.847 12:37:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.847 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.105 nvme0n1 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:30.105 12:37:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.105 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.106 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.363 nvme0n1 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.363 
12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.363 
12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.363 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.621 nvme0n1 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.621 12:37:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:30.621 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.622 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.879 nvme0n1 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.879 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.136 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.137 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.394 nvme0n1 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.394 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.395 12:37:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.395 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.652 nvme0n1 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.652 12:37:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.652 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.217 nvme0n1 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.217 
12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:32.217 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.218 12:37:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.218 12:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.475 nvme0n1 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.475 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:32.733 12:37:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.733 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.298 nvme0n1 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:33.298 
12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.298 12:37:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.298 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.556 nvme0n1 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.556 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 12:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.813 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.379 nvme0n1 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:34.379 12:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:34.379 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.380 12:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.637 nvme0n1 00:30:34.637 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:34.894 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGEwMjM1NzYxNzAzYmQ0MDQ0YmVjODZhYTA0ZjNiMmJ+vzVr: 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: ]] 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkOTM3NDljMmUwOTQ1MWUwMTZhNmJjNDAwZjVhMTUyNGU0ZGQ5MjZjMjk1MGQ3YTBiZDQ1NDUyZmU0ZjUwYRvHaXI=: 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.895 12:37:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.895 12:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.828 nvme0n1 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.828 12:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:35.828 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:35.829 12:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.829 12:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.829 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.394 nvme0n1 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.394 12:37:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.394 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:36.651 12:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.651 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.652 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.217 nvme0n1 00:30:37.217 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.217 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.217 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:37.217 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.217 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODYyMmIyYmJkZjUzYzY0YWYzYTQ2MzNmMjI0MTcyZmYyMzM5N2VjOGMwMDRlMzc28bBPsw==: 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: ]] 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmMjNlMWQ2OTAyMGM2MDViYzBiZTljMWU1YTI1NWO/pODD: 00:30:37.475 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:37.476 12:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.476 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.407 nvme0n1 00:30:38.407 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.407 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.407 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:38.407 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.407 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM3MmM5MzVhMDc1OWE3ODBjM2U2NjYwMjgxZmRkMTU4ZjQ1YWY3ZDFkZmQxOTYzZTU2MThhZjNhYjIyNzE2NKrlrU8=: 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:38.408 
12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.408 12:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.972 nvme0n1 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:38.972 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:39.230 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.231 request: 00:30:39.231 { 00:30:39.231 "name": "nvme0", 00:30:39.231 "trtype": "tcp", 00:30:39.231 "traddr": "10.0.0.1", 00:30:39.231 "adrfam": "ipv4", 00:30:39.231 "trsvcid": "4420", 00:30:39.231 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:39.231 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:39.231 "prchk_reftag": false, 00:30:39.231 "prchk_guard": false, 00:30:39.231 "hdgst": false, 00:30:39.231 "ddgst": false, 00:30:39.231 "allow_unrecognized_csi": false, 00:30:39.231 "method": "bdev_nvme_attach_controller", 00:30:39.231 "req_id": 1 00:30:39.231 } 00:30:39.231 Got JSON-RPC error 
response 00:30:39.231 response: 00:30:39.231 { 00:30:39.231 "code": -5, 00:30:39.231 "message": "Input/output error" 00:30:39.231 } 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.231 request: 
00:30:39.231 { 00:30:39.231 "name": "nvme0", 00:30:39.231 "trtype": "tcp", 00:30:39.231 "traddr": "10.0.0.1", 00:30:39.231 "adrfam": "ipv4", 00:30:39.231 "trsvcid": "4420", 00:30:39.231 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:39.231 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:39.231 "prchk_reftag": false, 00:30:39.231 "prchk_guard": false, 00:30:39.231 "hdgst": false, 00:30:39.231 "ddgst": false, 00:30:39.231 "dhchap_key": "key2", 00:30:39.231 "allow_unrecognized_csi": false, 00:30:39.231 "method": "bdev_nvme_attach_controller", 00:30:39.231 "req_id": 1 00:30:39.231 } 00:30:39.231 Got JSON-RPC error response 00:30:39.231 response: 00:30:39.231 { 00:30:39.231 "code": -5, 00:30:39.231 "message": "Input/output error" 00:30:39.231 } 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.231 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.489 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:39.489 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:30:39.489 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.489 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.490 12:37:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.490 request: 00:30:39.490 { 00:30:39.490 "name": "nvme0", 00:30:39.490 "trtype": "tcp", 00:30:39.490 "traddr": "10.0.0.1", 00:30:39.490 "adrfam": "ipv4", 00:30:39.490 "trsvcid": "4420", 00:30:39.490 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:39.490 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:39.490 "prchk_reftag": false, 00:30:39.490 "prchk_guard": false, 00:30:39.490 "hdgst": false, 00:30:39.490 "ddgst": false, 00:30:39.490 "dhchap_key": "key1", 00:30:39.490 "dhchap_ctrlr_key": "ckey2", 00:30:39.490 "allow_unrecognized_csi": false, 00:30:39.490 "method": "bdev_nvme_attach_controller", 00:30:39.490 "req_id": 1 00:30:39.490 } 00:30:39.490 Got JSON-RPC error response 00:30:39.490 response: 00:30:39.490 { 00:30:39.490 "code": -5, 00:30:39.490 "message": "Input/output error" 00:30:39.490 } 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.490 12:37:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.490 nvme0n1 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:39.490 12:37:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.490 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:39.748 
12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.748 request: 00:30:39.748 { 00:30:39.748 "name": "nvme0", 00:30:39.748 "dhchap_key": "key1", 00:30:39.748 "dhchap_ctrlr_key": "ckey2", 00:30:39.748 "method": "bdev_nvme_set_keys", 00:30:39.748 "req_id": 1 00:30:39.748 } 00:30:39.748 Got JSON-RPC error response 00:30:39.748 response: 
00:30:39.748 { 00:30:39.748 "code": -13, 00:30:39.748 "message": "Permission denied" 00:30:39.748 } 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:39.748 12:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:40.710 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:40.710 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:40.710 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.710 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.710 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.967 12:37:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:40.967 12:37:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM1ZmI3NzYyOTQ2NTY5NzZlZjMxYTBhNWViMWEyMzAzYmIzNzRlOWFhYzcwM2QxS2Nm2g==: 00:30:41.900 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: ]] 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjZDEwYTQwMTU5NzA4ZjdmZTMwNThiZDdiYzc3YWI2NGY3MjQ4YTMyZmE2NWJmq4HrHg==: 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.901 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.159 nvme0n1 00:30:42.159 12:37:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM5ODAxMmE2MmRkMWNhZWMxYjY5OTQ0MjRmNTY5ODMzIr0l: 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: ]] 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUwMWIwNWE4ZDk1MDY1OTFjNTlhZWY1NWFhZjA2YzV9cKjn: 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:42.159 12:37:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.159 request: 00:30:42.159 { 00:30:42.159 "name": "nvme0", 00:30:42.159 "dhchap_key": "key2", 00:30:42.159 "dhchap_ctrlr_key": "ckey1", 00:30:42.159 "method": "bdev_nvme_set_keys", 00:30:42.159 "req_id": 1 00:30:42.159 } 00:30:42.159 Got JSON-RPC error response 00:30:42.159 response: 00:30:42.159 { 00:30:42.159 "code": -13, 00:30:42.159 "message": "Permission denied" 00:30:42.159 } 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:42.159 12:37:13 
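The `NOT` wrapper above asserts that `bdev_nvme_set_keys` fails: swapping to `key2`/`ckey1` is rejected by the target with JSON-RPC error code `-13` ("Permission denied"). A small sketch of checking that error code from a captured response body; the response text is copied from the log, while `rpc_error_code` is a hypothetical helper (SPDK's harness checks the exit status instead):

```shell
# Hypothetical helper: extract the "code" field from a JSON-RPC error
# response without jq. The response body below is taken from the trace above.
resp='{ "code": -13, "message": "Permission denied" }'

rpc_error_code() {
    printf '%s\n' "$1" | sed -n 's/.*"code": *\(-\{0,1\}[0-9]\{1,\}\).*/\1/p'
}

if [ "$(rpc_error_code "$resp")" = "-13" ]; then
    echo "bdev_nvme_set_keys rejected as expected"
fi
```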
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:42.159 12:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:43.091 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:43.091 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:43.091 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.091 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.091 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.349 rmmod nvme_tcp 
00:30:43.349 rmmod nvme_fabrics 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 327122 ']' 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 327122 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 327122 ']' 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 327122 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 327122 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 327122' 00:30:43.349 killing process with pid 327122 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 327122 00:30:43.349 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 327122 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 
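The `killprocess 327122` sequence above first checks the pid still belongs to the expected process (`ps --no-headers -o comm=`), refuses to kill anything running as `sudo`, then kills and waits. A simplified sketch of that pattern, without the name and sudo checks:

```shell
# Simplified reconstruction of the killprocess pattern in the trace:
# confirm the pid is live, send SIGTERM, then reap it. (The real SPDK helper
# also verifies the process name and refuses to kill sudo.)
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true           # reap; ignore kill exit status
}

sleep 60 &
bg=$!
killprocess "$bg"
kill -0 "$bg" 2>/dev/null || echo "background process stopped"
```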
-- # nvmf_tcp_fini 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.607 12:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.607 12:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.607 12:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.607 12:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.607 12:37:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:45.509 12:37:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:45.509 12:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:48.038 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:48.038 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:48.038 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:48.296 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:49.230 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:30:49.230 12:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.u6n /tmp/spdk.key-null.Iva /tmp/spdk.key-sha256.IcK /tmp/spdk.key-sha384.YsM 
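The `clean_kernel_target` steps traced above tear down the kernel nvmet configfs tree in a specific order: unlink the port-to-subsystem binding first, then remove namespaces, the port, the subsystem, and finally the modules. A dry-run sketch of that order (echoed rather than executed, since the real paths need root and a configured target; the NQN is the one from the log):

```shell
# Dry-run of the nvmet configfs teardown order from the trace above.
CFG=/sys/kernel/config/nvmet
NQN=nqn.2024-02.io.spdk:cnode0

clean_kernel_target_plan() {
    echo "rm -f $CFG/ports/1/subsystems/$NQN"
    echo "rmdir $CFG/subsystems/$NQN/namespaces/1"
    echo "rmdir $CFG/ports/1"
    echo "rmdir $CFG/subsystems/$NQN"
    echo "modprobe -r nvmet_tcp nvmet"
}
clean_kernel_target_plan
```

Doing the `rmdir`s in any other order fails: configfs refuses to remove a subsystem directory while a port still links to it or namespaces still exist under it.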
/tmp/spdk.key-sha512.5PD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:49.230 12:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:52.560 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:52.560 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:52.560 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:52.560 00:30:52.560 real 0m58.066s 00:30:52.560 user 0m52.896s 00:30:52.560 sys 0m11.648s 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.560 ************************************ 00:30:52.560 END TEST nvmf_auth_host 00:30:52.560 ************************************ 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.560 ************************************ 00:30:52.560 START TEST nvmf_digest 00:30:52.560 ************************************ 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:52.560 * Looking for test storage... 00:30:52.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.560 12:37:23 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.560 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
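The `cmp_versions 1.15 '<' 2` walk above splits both dotted versions into fields and compares them element by element. An equivalent check can be sketched with GNU `sort -V` (a simplification, assuming GNU coreutils; not the field-by-field loop `scripts/common.sh` actually runs):

```shell
# Hedged alternative to the field-by-field cmp_versions seen above:
# "is $1 strictly less than $2" using GNU sort's version ordering.
version_lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```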
-- # (( ver1[v] < ver2[v] )) 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.561 --rc genhtml_branch_coverage=1 00:30:52.561 --rc genhtml_function_coverage=1 00:30:52.561 --rc genhtml_legend=1 00:30:52.561 --rc geninfo_all_blocks=1 00:30:52.561 --rc geninfo_unexecuted_blocks=1 00:30:52.561 00:30:52.561 ' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.561 --rc genhtml_branch_coverage=1 00:30:52.561 --rc genhtml_function_coverage=1 00:30:52.561 --rc genhtml_legend=1 00:30:52.561 --rc geninfo_all_blocks=1 00:30:52.561 --rc geninfo_unexecuted_blocks=1 00:30:52.561 00:30:52.561 ' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.561 --rc genhtml_branch_coverage=1 00:30:52.561 --rc genhtml_function_coverage=1 00:30:52.561 --rc genhtml_legend=1 00:30:52.561 --rc geninfo_all_blocks=1 00:30:52.561 --rc geninfo_unexecuted_blocks=1 00:30:52.561 00:30:52.561 ' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:52.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.561 --rc genhtml_branch_coverage=1 00:30:52.561 --rc genhtml_function_coverage=1 00:30:52.561 --rc genhtml_legend=1 00:30:52.561 --rc geninfo_all_blocks=1 00:30:52.561 --rc geninfo_unexecuted_blocks=1 00:30:52.561 00:30:52.561 ' 00:30:52.561 12:37:23 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.561 
12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
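The `paths/export.sh` trace above prepends the same toolchain directories (`/opt/golangci/...`, `/opt/protoc/...`, `/opt/go/...`) on every `source`, so `PATH` accumulates many duplicate entries. A small dedupe that keeps the first occurrence of each entry (a hypothetical helper, not part of SPDK):

```shell
# Keep only the first occurrence of each PATH entry, preserving order.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"
```

Applied to the exported `PATH` above, this would collapse the eight repeated toolchain prefixes into one.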
00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:52.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.561 12:37:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.561 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.000 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.001 12:37:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:58.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:58.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:58.001 Found net devices under 0000:af:00.0: cvl_0_0 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:58.001 Found net devices under 0000:af:00.1: cvl_0_1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
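The discovery loop above maps each PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with the `${pci_net_devs[@]##*/}` expansion (common.sh@411 and @427). A minimal, runnable sketch of that expansion — the sysfs path is illustrative, not read from a live system:

```shell
#!/usr/bin/env bash
# Simulate the glob result for one PCI function: full sysfs paths
# to the net devices registered under it (as common.sh@411 collects).
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0")

# Strip the longest prefix ending in '/' from every element, leaving
# only the interface names (what common.sh@427 does before echoing).
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:af:00.0: ${pci_net_devs[*]}"
```

The `##*/` form matters: `#*/` would remove only up to the first slash and leave most of the path behind.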
00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:30:58.001 00:30:58.001 --- 10.0.0.2 ping statistics --- 00:30:58.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.001 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:58.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:30:58.001 00:30:58.001 --- 10.0.0.1 ping statistics --- 00:30:58.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.001 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:58.001 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.259 ************************************ 00:30:58.259 START TEST nvmf_digest_clean 00:30:58.259 ************************************ 00:30:58.259 
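The nvmf_tcp_init sequence traced above (common.sh@250–291) moves one interface into a fresh network namespace, addresses both ends, opens TCP port 4420, and ping-checks the path. The dry-run sketch below replays that sequence; the `run` wrapper only prints each command, so it executes without root or real NICs — the interface names are taken from this log, but nothing is actually configured:

```shell
#!/usr/bin/env bash
# Dry-run replay of the nvmf_tcp_init plumbing; run() only echoes.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0          # interface handed to the target namespace
INIT_IF=cvl_0_1            # interface left with the initiator
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged with a comment so cleanup can find the rule.
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INIT_IF -p tcp --dport 4420 -j ACCEPT"
# Sanity pings in both directions, as common.sh@290-291 does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The iptables comment is what lets the `ipts` helper at common.sh@287 remove exactly its own rules later.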
12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=342623 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 342623 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 342623 ']' 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.259 12:37:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.259 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.259 [2024-11-06 12:37:29.725616] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:30:58.259 [2024-11-06 12:37:29.725669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.259 [2024-11-06 12:37:29.826070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.259 [2024-11-06 12:37:29.872926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.259 [2024-11-06 12:37:29.872969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.259 [2024-11-06 12:37:29.872979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.259 [2024-11-06 12:37:29.872988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.259 [2024-11-06 12:37:29.872996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
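`waitforlisten` above blocks until the freshly launched nvmf_tgt is reachable on `/var/tmp/spdk.sock`, bounded by `max_retries=100`. A runnable sketch of that polling idea — it uses a plain file as a stand-in for the UNIX socket so it works anywhere, whereas the real helper in autotest_common.sh also verifies the pid is still alive:

```shell
#!/usr/bin/env bash
# Poll until a path appears, giving up after max_retries attempts.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -e "$sock" ] && return 0   # socket (here: file) showed up
        sleep 0.1
    done
    return 1                          # app never started listening
}

sock=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &    # pretend the target creates its socket
wait_for_sock "$sock" 100 && echo "listening on $sock"
wait
rm -f "$sock"
```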
00:30:58.259 [2024-11-06 12:37:29.873811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.517 12:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.517 null0 00:30:58.517 [2024-11-06 12:37:30.080932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.517 [2024-11-06 12:37:30.105161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
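With the listener up on 10.0.0.2:4420, `run_bperf randread 4096 128 false` drives a 4 KiB random-read load at queue depth 128. In the bdevperf tables that follow, the IOPS and MiB/s columns are tied by MiB/s = IOPS × io_size / 2^20, which is a quick way to sanity-check the results; for example:

```shell
#!/usr/bin/env bash
# Cross-check bdevperf's MiB/s column: MiB/s = IOPS * io_size / 2^20.
iops=17670.07   # average IOPS reported for the 4 KiB randread run below
io_size=4096    # bytes per I/O (-o 4096)

awk -v iops="$iops" -v bs="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * bs / (1024 * 1024) }'
# prints "69.02 MiB/s", matching the table
```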
00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=342644 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 342644 /var/tmp/bperf.sock 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 342644 ']' 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.517 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.775 [2024-11-06 12:37:30.163025] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:30:58.775 [2024-11-06 12:37:30.163082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342644 ] 00:30:58.775 [2024-11-06 12:37:30.229138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.775 [2024-11-06 12:37:30.267361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.033 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:59.033 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:30:59.033 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:59.033 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:59.033 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:59.290 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.290 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.854 nvme0n1 00:30:59.854 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:59.854 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.854 Running I/O for 2 seconds... 00:31:01.719 17508.00 IOPS, 68.39 MiB/s [2024-11-06T11:37:33.334Z] 17634.00 IOPS, 68.88 MiB/s 00:31:01.719 Latency(us) 00:31:01.719 [2024-11-06T11:37:33.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:01.719 nvme0n1 : 2.00 17670.07 69.02 0.00 0.00 7239.15 1906.50 21090.68 00:31:01.719 [2024-11-06T11:37:33.334Z] =================================================================================================================== 00:31:01.719 [2024-11-06T11:37:33.334Z] Total : 17670.07 69.02 0.00 0.00 7239.15 1906.50 21090.68 00:31:01.977 { 00:31:01.977 "results": [ 00:31:01.977 { 00:31:01.977 "job": "nvme0n1", 00:31:01.977 "core_mask": "0x2", 00:31:01.977 "workload": "randread", 00:31:01.977 "status": "finished", 00:31:01.977 "queue_depth": 128, 00:31:01.977 "io_size": 4096, 00:31:01.977 "runtime": 2.003161, 00:31:01.977 "iops": 17670.072450491996, 00:31:01.977 "mibps": 69.02372050973436, 00:31:01.977 "io_failed": 0, 00:31:01.977 "io_timeout": 0, 00:31:01.977 "avg_latency_us": 7239.149269254872, 00:31:01.977 "min_latency_us": 1906.5018181818182, 00:31:01.977 "max_latency_us": 21090.676363636365 00:31:01.977 } 00:31:01.977 ], 00:31:01.977 "core_count": 1 00:31:01.977 } 00:31:01.977 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:01.977 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:31:01.977 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:01.977 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:01.977 | select(.opcode=="crc32c") 00:31:01.977 | "\(.module_name) \(.executed)"' 00:31:01.977 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 342644 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 342644 ']' 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 342644 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 342644 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 342644' 00:31:02.236 killing process with pid 342644 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 342644 00:31:02.236 Received shutdown signal, test time was about 2.000000 seconds 00:31:02.236 00:31:02.236 Latency(us) 00:31:02.236 [2024-11-06T11:37:33.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.236 [2024-11-06T11:37:33.851Z] =================================================================================================================== 00:31:02.236 [2024-11-06T11:37:33.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 342644 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=343432 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 343432 /var/tmp/bperf.sock 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 343432 ']' 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.236 12:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:02.494 [2024-11-06 12:37:33.876317] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:02.494 [2024-11-06 12:37:33.876376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343432 ] 00:31:02.494 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.494 Zero copy mechanism will not be used. 
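After each bperf run, `get_accel_stats` queries `accel_get_stats` over bperf.sock and reduces the JSON with the jq filter shown earlier to a single "module_name executed" line for the crc32c opcode, which `read -r acc_module acc_executed` (digest.sh@93) then splits. A sketch of that last step — the stats function here just echoes a sample line (the count is made up, not from this run):

```shell
#!/usr/bin/env bash
# Stand-in for: bperf_rpc accel_get_stats | jq -rc '.operations[]
#   | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
get_accel_stats() {
    echo "software 17634"   # illustrative module/count pair
}

# digest.sh@93: split the line into the module name and executed count.
read -r acc_module acc_executed <<< "$(get_accel_stats)"

# digest.sh@94-96: with DSA disabled the expected module is "software",
# and at least one crc32c operation must actually have run.
exp_module=software
if (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]; then
    echo "digest offload check passed ($acc_module executed $acc_executed ops)"
fi
```

This is how the test distinguishes software crc32c from a DSA-offloaded path without parsing the full stats object in shell.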
00:31:02.494 [2024-11-06 12:37:33.942209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.494 [2024-11-06 12:37:33.982371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.494 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.494 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:02.494 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:02.494 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:02.494 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:03.059 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.059 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.317 nvme0n1 00:31:03.317 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:03.317 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.574 Zero copy mechanism will not be used. 00:31:03.574 Running I/O for 2 seconds... 
00:31:05.440 4788.00 IOPS, 598.50 MiB/s [2024-11-06T11:37:37.055Z] 4786.00 IOPS, 598.25 MiB/s 00:31:05.440 Latency(us) 00:31:05.440 [2024-11-06T11:37:37.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.440 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:05.440 nvme0n1 : 2.00 4789.39 598.67 0.00 0.00 3338.36 1213.91 8757.99 00:31:05.440 [2024-11-06T11:37:37.055Z] =================================================================================================================== 00:31:05.440 [2024-11-06T11:37:37.055Z] Total : 4789.39 598.67 0.00 0.00 3338.36 1213.91 8757.99 00:31:05.440 { 00:31:05.440 "results": [ 00:31:05.440 { 00:31:05.440 "job": "nvme0n1", 00:31:05.440 "core_mask": "0x2", 00:31:05.440 "workload": "randread", 00:31:05.440 "status": "finished", 00:31:05.440 "queue_depth": 16, 00:31:05.440 "io_size": 131072, 00:31:05.440 "runtime": 2.001926, 00:31:05.440 "iops": 4789.387819529793, 00:31:05.440 "mibps": 598.6734774412241, 00:31:05.440 "io_failed": 0, 00:31:05.440 "io_timeout": 0, 00:31:05.440 "avg_latency_us": 3338.3642765578184, 00:31:05.440 "min_latency_us": 1213.9054545454546, 00:31:05.440 "max_latency_us": 8757.992727272727 00:31:05.440 } 00:31:05.440 ], 00:31:05.440 "core_count": 1 00:31:05.440 } 00:31:05.440 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:05.440 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:05.440 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:05.440 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:05.440 | select(.opcode=="crc32c") 00:31:05.440 | "\(.module_name) \(.executed)"' 00:31:05.440 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 343432 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 343432 ']' 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 343432 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:05.697 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 343432 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 343432' 00:31:05.955 killing process with pid 343432 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 343432 00:31:05.955 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.955 
00:31:05.955 Latency(us) 00:31:05.955 [2024-11-06T11:37:37.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.955 [2024-11-06T11:37:37.570Z] =================================================================================================================== 00:31:05.955 [2024-11-06T11:37:37.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 343432 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=343970 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 343970 /var/tmp/bperf.sock 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 343970 ']' 00:31:05.955 12:37:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:05.955 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:05.955 [2024-11-06 12:37:37.568682] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:05.955 [2024-11-06 12:37:37.568740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343970 ] 00:31:06.213 [2024-11-06 12:37:37.634018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.213 [2024-11-06 12:37:37.674266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.213 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:06.213 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:06.213 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:06.213 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:06.213 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:06.779 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.779 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:07.036 nvme0n1 00:31:07.036 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:07.036 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.294 Running I/O for 2 seconds... 
00:31:09.158 17565.00 IOPS, 68.61 MiB/s [2024-11-06T11:37:41.031Z] 17634.50 IOPS, 68.88 MiB/s 00:31:09.416 Latency(us) 00:31:09.416 [2024-11-06T11:37:41.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.416 nvme0n1 : 2.01 17636.55 68.89 0.00 0.00 7243.33 6225.92 15132.86 00:31:09.416 [2024-11-06T11:37:41.031Z] =================================================================================================================== 00:31:09.416 [2024-11-06T11:37:41.031Z] Total : 17636.55 68.89 0.00 0.00 7243.33 6225.92 15132.86 00:31:09.416 { 00:31:09.416 "results": [ 00:31:09.416 { 00:31:09.416 "job": "nvme0n1", 00:31:09.416 "core_mask": "0x2", 00:31:09.416 "workload": "randwrite", 00:31:09.416 "status": "finished", 00:31:09.416 "queue_depth": 128, 00:31:09.416 "io_size": 4096, 00:31:09.416 "runtime": 2.008386, 00:31:09.416 "iops": 17636.549946076102, 00:31:09.416 "mibps": 68.89277322685977, 00:31:09.416 "io_failed": 0, 00:31:09.416 "io_timeout": 0, 00:31:09.416 "avg_latency_us": 7243.328761007208, 00:31:09.416 "min_latency_us": 6225.92, 00:31:09.416 "max_latency_us": 15132.858181818181 00:31:09.416 } 00:31:09.416 ], 00:31:09.416 "core_count": 1 00:31:09.416 } 00:31:09.416 12:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:09.416 12:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:09.416 12:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:09.416 12:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:09.416 | select(.opcode=="crc32c") 00:31:09.416 | "\(.module_name) \(.executed)"' 00:31:09.416 12:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 343970 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 343970 ']' 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 343970 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 343970 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 343970' 00:31:09.674 killing process with pid 343970 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 343970 00:31:09.674 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.674 
00:31:09.674 Latency(us) 00:31:09.674 [2024-11-06T11:37:41.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.674 [2024-11-06T11:37:41.289Z] =================================================================================================================== 00:31:09.674 [2024-11-06T11:37:41.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.674 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 343970 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=344694 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 344694 /var/tmp/bperf.sock 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 344694 ']' 00:31:09.933 12:37:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:09.933 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:09.933 [2024-11-06 12:37:41.351104] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:09.933 [2024-11-06 12:37:41.351165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344694 ] 00:31:09.933 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:09.933 Zero copy mechanism will not be used. 
00:31:09.933 [2024-11-06 12:37:41.417648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.933 [2024-11-06 12:37:41.458256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.191 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:10.191 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:31:10.191 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:10.191 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:10.191 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:10.448 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.448 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:11.013 nvme0n1 00:31:11.013 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:11.013 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:11.013 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:11.013 Zero copy mechanism will not be used. 00:31:11.013 Running I/O for 2 seconds... 
00:31:13.321 5372.00 IOPS, 671.50 MiB/s [2024-11-06T11:37:44.936Z] 5099.50 IOPS, 637.44 MiB/s 00:31:13.321 Latency(us) 00:31:13.321 [2024-11-06T11:37:44.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.321 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:13.321 nvme0n1 : 2.00 5097.43 637.18 0.00 0.00 3133.65 2606.55 7745.16 00:31:13.321 [2024-11-06T11:37:44.936Z] =================================================================================================================== 00:31:13.321 [2024-11-06T11:37:44.936Z] Total : 5097.43 637.18 0.00 0.00 3133.65 2606.55 7745.16 00:31:13.321 { 00:31:13.321 "results": [ 00:31:13.321 { 00:31:13.321 "job": "nvme0n1", 00:31:13.321 "core_mask": "0x2", 00:31:13.321 "workload": "randwrite", 00:31:13.321 "status": "finished", 00:31:13.321 "queue_depth": 16, 00:31:13.321 "io_size": 131072, 00:31:13.321 "runtime": 2.004933, 00:31:13.321 "iops": 5097.427195821506, 00:31:13.321 "mibps": 637.1783994776882, 00:31:13.321 "io_failed": 0, 00:31:13.321 "io_timeout": 0, 00:31:13.321 "avg_latency_us": 3133.651321828856, 00:31:13.321 "min_latency_us": 2606.5454545454545, 00:31:13.321 "max_latency_us": 7745.163636363636 00:31:13.321 } 00:31:13.321 ], 00:31:13.321 "core_count": 1 00:31:13.321 } 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:31:13.321 | select(.opcode=="crc32c") 00:31:13.321 | "\(.module_name) \(.executed)"' 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 344694 00:31:13.321 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 344694 ']' 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 344694 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 344694 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 344694' 00:31:13.322 killing process with pid 344694 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 344694 00:31:13.322 Received shutdown signal, test time was about 2.000000 seconds 00:31:13.322 00:31:13.322 
Latency(us) 00:31:13.322 [2024-11-06T11:37:44.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.322 [2024-11-06T11:37:44.937Z] =================================================================================================================== 00:31:13.322 [2024-11-06T11:37:44.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.322 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 344694 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 342623 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 342623 ']' 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 342623 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 342623 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 342623' 00:31:13.580 killing process with pid 342623 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 342623 00:31:13.580 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 342623 00:31:13.838 00:31:13.839 real 0m15.616s 00:31:13.839 user 
0m31.647s 00:31:13.839 sys 0m4.354s 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:13.839 ************************************ 00:31:13.839 END TEST nvmf_digest_clean 00:31:13.839 ************************************ 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.839 ************************************ 00:31:13.839 START TEST nvmf_digest_error 00:31:13.839 ************************************ 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=345327 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 345327 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 345327 ']' 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:13.839 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.839 [2024-11-06 12:37:45.398264] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:13.839 [2024-11-06 12:37:45.398320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.097 [2024-11-06 12:37:45.498956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.097 [2024-11-06 12:37:45.546830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.097 [2024-11-06 12:37:45.546871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:14.097 [2024-11-06 12:37:45.546883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.097 [2024-11-06 12:37:45.546895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.097 [2024-11-06 12:37:45.546903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.097 [2024-11-06 12:37:45.547633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.097 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:14.097 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.098 [2024-11-06 12:37:45.664269] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.098 12:37:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.098 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.356 null0 00:31:14.356 [2024-11-06 12:37:45.761685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.356 [2024-11-06 12:37:45.785897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=345479 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 345479 /var/tmp/bperf.sock 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 345479 ']' 
00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:14.356 12:37:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.356 [2024-11-06 12:37:45.844848] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:14.356 [2024-11-06 12:37:45.844907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345479 ] 00:31:14.356 [2024-11-06 12:37:45.911628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.356 [2024-11-06 12:37:45.951982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.615 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:14.615 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:14.615 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:14.615 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:14.872 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.128 nvme0n1 00:31:15.128 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:15.128 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.128 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:15.128 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.385 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:15.385 12:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:15.385 Running I/O for 2 seconds... 00:31:15.385 [2024-11-06 12:37:46.896671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.385 [2024-11-06 12:37:46.896701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.385 [2024-11-06 12:37:46.896710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.385 [2024-11-06 12:37:46.912042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.385 [2024-11-06 12:37:46.912065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.385 [2024-11-06 12:37:46.912074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.385 [2024-11-06 12:37:46.925409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.385 [2024-11-06 12:37:46.925429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.385 [2024-11-06 12:37:46.925437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.385 [2024-11-06 12:37:46.937973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.385 [2024-11-06 12:37:46.937992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12482 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.385 [2024-11-06 12:37:46.937999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.385 [2024-11-06 12:37:46.952220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.385 [2024-11-06 12:37:46.952239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.386 [2024-11-06 12:37:46.952247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.386 [2024-11-06 12:37:46.966540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.386 [2024-11-06 12:37:46.966559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.386 [2024-11-06 12:37:46.966567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.386 [2024-11-06 12:37:46.980951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.386 [2024-11-06 12:37:46.980970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.386 [2024-11-06 12:37:46.980978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.386 [2024-11-06 12:37:46.996256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.386 [2024-11-06 12:37:46.996275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.386 [2024-11-06 12:37:46.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.011017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.011036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.011044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.025271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.025291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.025298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.040811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.040831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.054869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 
12:37:47.054889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.054897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.070297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.070317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.070324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.085768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.085789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.085796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.101449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.101476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.101484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.644 [2024-11-06 12:37:47.115826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x604860) 00:31:15.644 [2024-11-06 12:37:47.115845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.644 [2024-11-06 12:37:47.115853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.130200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.130224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.130231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.145215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.145236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.145243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.159054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.159073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.159080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.174197] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.174217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.174224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.188299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.188318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.188325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.202854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.202874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.202882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.217277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.217296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.217304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.230424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.230444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.230451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.244811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.244830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.244837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.645 [2024-11-06 12:37:47.260237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.645 [2024-11-06 12:37:47.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.645 [2024-11-06 12:37:47.260264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.274817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.274837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.274845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.287665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.287684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.287692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.302075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.302095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.302103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.316486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.316507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.316515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.331574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.331594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.331601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.346390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.346410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.360967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.360986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.360994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.375177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.375196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.375207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.389429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.389448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:15.904 [2024-11-06 12:37:47.389456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.404281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.404299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.904 [2024-11-06 12:37:47.404307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.904 [2024-11-06 12:37:47.419314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.904 [2024-11-06 12:37:47.419333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.419340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.433635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.433654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.433662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.448608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.448627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:8920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.448634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.462786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.462806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.462813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.477157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.477176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.477184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.491597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.491616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.491624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:15.905 [2024-11-06 12:37:47.506779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:15.905 [2024-11-06 12:37:47.506805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.905 [2024-11-06 12:37:47.506812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.521031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.521052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.521060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.535579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.535598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.535606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.551193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.551213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.551221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.565283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.565302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.565310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.580872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.580892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.580901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.595187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.595208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.595215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.609526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.609545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.609552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.625665] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.625685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.625693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.640585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.640605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.640613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.653675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.653695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.163 [2024-11-06 12:37:47.653702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.163 [2024-11-06 12:37:47.667907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.163 [2024-11-06 12:37:47.667926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.667934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.682366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.682386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.682393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.695241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.695260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.695268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.710030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.710049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.710057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.724340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.724359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.724367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.738203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.738222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.738230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.752940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.752959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.752970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.164 [2024-11-06 12:37:47.768036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.164 [2024-11-06 12:37:47.768055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.164 [2024-11-06 12:37:47.768063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.783241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.783261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.783268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.799141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.799159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.799167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.814031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.814050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.814058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.827793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.827811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.827819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.839445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.839470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:16.422 [2024-11-06 12:37:47.839478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.853590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.853609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.853617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 17339.00 IOPS, 67.73 MiB/s [2024-11-06T11:37:48.037Z] [2024-11-06 12:37:47.871365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.871383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.871390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.885841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.885860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.885868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.898863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.898881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.898889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.913258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.913277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.913284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.422 [2024-11-06 12:37:47.927044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.422 [2024-11-06 12:37:47.927063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.422 [2024-11-06 12:37:47.927071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:47.941446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:47.941471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:47.941479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:47.955817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 
00:31:16.423 [2024-11-06 12:37:47.955836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:47.955843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:47.972489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:47.972507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:47.972515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:47.985418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:47.985438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:47.985445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:47.999524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:47.999543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:47.999554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:48.014246] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:48.014265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:48.014272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.423 [2024-11-06 12:37:48.028614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.423 [2024-11-06 12:37:48.028638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.423 [2024-11-06 12:37:48.028646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.043556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.043576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.043583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.058382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.058401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.058409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.072572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.072591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.088286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.088313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.102308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.102327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.102334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.118057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.681 [2024-11-06 12:37:48.118077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.681 [2024-11-06 12:37:48.118084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.681 [2024-11-06 12:37:48.131726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.131749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.131757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.144769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.144789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.144796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.159157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.159176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.159183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.173382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.173401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.173408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.188970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.188988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.188996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.203204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.203222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.203229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.219769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.219788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.233207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.233226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7412 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.233233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.248398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.248417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.248425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.264307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.264326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.264333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.278672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.278691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.278699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.682 [2024-11-06 12:37:48.292514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.682 [2024-11-06 12:37:48.292533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.682 [2024-11-06 12:37:48.292540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.307211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.307229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.307237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.320701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.320720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.320728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.336065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.336084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.336091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.349570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.349590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.349597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.362695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.362714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.362721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.377182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.377203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.377214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.391431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.391450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.391462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.407600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.407619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.407626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.423075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.423094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.423102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.437338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.437357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.437364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.451736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.451756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.451763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.465471] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.465490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.465497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.481044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.481064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.481071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.495395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.495414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.495421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.509723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.509742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.524135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.524154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.524162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.538444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.538476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.941 [2024-11-06 12:37:48.553069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:16.941 [2024-11-06 12:37:48.553090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.941 [2024-11-06 12:37:48.553098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.200 [2024-11-06 12:37:48.567767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.200 [2024-11-06 12:37:48.567786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.200 [2024-11-06 12:37:48.567794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.200 [2024-11-06 12:37:48.583676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.200 [2024-11-06 12:37:48.583696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.200 [2024-11-06 12:37:48.583704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.200 [2024-11-06 12:37:48.598110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.200 [2024-11-06 12:37:48.598130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.200 [2024-11-06 12:37:48.598137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.200 [2024-11-06 12:37:48.612557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.200 [2024-11-06 12:37:48.612577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.200 [2024-11-06 12:37:48.612584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.200 [2024-11-06 12:37:48.626931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.626952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.626963] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.641309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.641329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.641337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.655694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.655715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.655723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.669379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.669398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.669406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.685480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.685500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18769 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.685508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.697104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.697124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.697131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.711810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.711829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.711837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.724783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.724805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.724813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.740758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.740779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:5839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.740786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.754220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.754251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.768391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.768410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.768417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.784410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.784429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.784436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.798916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.798936] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.798944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.201 [2024-11-06 12:37:48.813320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.201 [2024-11-06 12:37:48.813340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-11-06 12:37:48.813348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.459 [2024-11-06 12:37:48.827806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.459 [2024-11-06 12:37:48.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.459 [2024-11-06 12:37:48.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.459 [2024-11-06 12:37:48.841122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.459 [2024-11-06 12:37:48.841143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.459 [2024-11-06 12:37:48.841150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.459 [2024-11-06 12:37:48.856850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x604860) 00:31:17.459 [2024-11-06 12:37:48.856870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.459 [2024-11-06 12:37:48.856878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.459 17465.00 IOPS, 68.22 MiB/s [2024-11-06T11:37:49.074Z] [2024-11-06 12:37:48.872371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x604860) 00:31:17.459 [2024-11-06 12:37:48.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.459 [2024-11-06 12:37:48.872396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.459 00:31:17.459 Latency(us) 00:31:17.459 [2024-11-06T11:37:49.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:17.459 nvme0n1 : 2.00 17485.58 68.30 0.00 0.00 7311.90 3649.16 23116.33 00:31:17.459 [2024-11-06T11:37:49.074Z] =================================================================================================================== 00:31:17.459 [2024-11-06T11:37:49.074Z] Total : 17485.58 68.30 0.00 0.00 7311.90 3649.16 23116.33 00:31:17.459 { 00:31:17.459 "results": [ 00:31:17.459 { 00:31:17.459 "job": "nvme0n1", 00:31:17.459 "core_mask": "0x2", 00:31:17.459 "workload": "randread", 00:31:17.459 "status": "finished", 00:31:17.460 "queue_depth": 128, 00:31:17.460 "io_size": 4096, 00:31:17.460 "runtime": 2.004966, 00:31:17.460 "iops": 17485.58329667436, 00:31:17.460 "mibps": 68.30305975263421, 00:31:17.460 "io_failed": 0, 00:31:17.460 "io_timeout": 0, 00:31:17.460 "avg_latency_us": 7311.896697317174, 
00:31:17.460 "min_latency_us": 3649.163636363636, 00:31:17.460 "max_latency_us": 23116.334545454545 00:31:17.460 } 00:31:17.460 ], 00:31:17.460 "core_count": 1 00:31:17.460 } 00:31:17.460 12:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:17.460 12:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:17.460 12:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:17.460 | .driver_specific 00:31:17.460 | .nvme_error 00:31:17.460 | .status_code 00:31:17.460 | .command_transient_transport_error' 00:31:17.460 12:37:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 345479 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 345479 ']' 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 345479 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 345479 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 345479' 00:31:17.718 killing process with pid 345479 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 345479 00:31:17.718 Received shutdown signal, test time was about 2.000000 seconds 00:31:17.718 00:31:17.718 Latency(us) 00:31:17.718 [2024-11-06T11:37:49.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.718 [2024-11-06T11:37:49.333Z] =================================================================================================================== 00:31:17.718 [2024-11-06T11:37:49.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:17.718 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 345479 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=346137 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 346137 /var/tmp/bperf.sock 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 
-q 16 -z 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 346137 ']' 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:17.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:17.976 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:17.976 [2024-11-06 12:37:49.451292] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:17.976 [2024-11-06 12:37:49.451354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346137 ] 00:31:17.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:17.976 Zero copy mechanism will not be used. 
00:31:17.976 [2024-11-06 12:37:49.517120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.976 [2024-11-06 12:37:49.550908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.242 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:18.242 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:18.242 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:18.242 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:18.501 12:37:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:19.068 nvme0n1 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:19.068 12:37:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:19.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:19.068 Zero copy mechanism will not be used. 00:31:19.068 Running I/O for 2 seconds... 00:31:19.068 [2024-11-06 12:37:50.563393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.563430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.563440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.570044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.570070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.570078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.068 
[2024-11-06 12:37:50.576702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.576726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.576734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.583361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.583383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.583391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.590009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.590030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.590038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.596573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.596595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.596603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.603170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.603200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.609739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.609760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.609771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.616297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.616318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.616326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.622891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.622912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.622920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.629488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.629509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.629517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.636172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.636193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.636201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.642714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.642735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.068 [2024-11-06 12:37:50.642742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.068 [2024-11-06 12:37:50.649313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.068 [2024-11-06 12:37:50.649335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:19.068 [2024-11-06 12:37:50.649342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.069 [2024-11-06 12:37:50.655895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.069 [2024-11-06 12:37:50.655915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.069 [2024-11-06 12:37:50.655923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.069 [2024-11-06 12:37:50.662546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.069 [2024-11-06 12:37:50.662567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.069 [2024-11-06 12:37:50.662575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.069 [2024-11-06 12:37:50.669083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.069 [2024-11-06 12:37:50.669104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.069 [2024-11-06 12:37:50.669112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.069 [2024-11-06 12:37:50.675677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.069 [2024-11-06 12:37:50.675699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.069 [2024-11-06 12:37:50.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.069 [2024-11-06 12:37:50.682265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.069 [2024-11-06 12:37:50.682286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.069 [2024-11-06 12:37:50.682294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.327 [2024-11-06 12:37:50.688959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.688980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.688988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.695570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.695591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.695600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.702187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.702207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.702214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.708766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.708788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.708795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.715294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.715315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.715323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.721856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.721877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.721888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.728442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 
00:31:19.328 [2024-11-06 12:37:50.728467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.728475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.735002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.735022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.735030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.741549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.741571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.741578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.748115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.748135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.748143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.754667] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.754686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.754694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.761244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.761265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.761273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.767757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.767777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.767784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.774299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.774321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.774328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.780835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.780867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.787344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.787364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.787372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.793895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.793916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.793923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.800475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.800495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.800503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.807064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.807085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.807093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.813687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.813708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.813715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.820316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.820337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.820345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.827029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.827049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.827057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.833614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.833635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.833643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.840207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.840228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.840236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.846795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.846815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.328 [2024-11-06 12:37:50.846822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.328 [2024-11-06 12:37:50.853315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.328 [2024-11-06 12:37:50.853336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:31:19.328 [2024-11-06 12:37:50.853343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.328 [2024-11-06 12:37:50.859895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.859916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.859923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.866509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.866530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.866538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.873105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.873125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.873134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.879679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.879699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.879707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.886245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.886266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.886274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.892851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.892872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.892883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.899402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.899422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.899429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.905954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.905976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.905984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.912544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.912565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.919098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.919119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.919126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.925648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.925669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.925676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.932186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.932207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.932214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.329 [2024-11-06 12:37:50.938728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.329 [2024-11-06 12:37:50.938748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.329 [2024-11-06 12:37:50.938755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.587 [2024-11-06 12:37:50.945384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.587 [2024-11-06 12:37:50.945404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.587 [2024-11-06 12:37:50.945412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.587 [2024-11-06 12:37:50.952055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.587 [2024-11-06 12:37:50.952080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.587 [2024-11-06 12:37:50.952087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.587 [2024-11-06 12:37:50.958651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.958672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.958679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.965345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.965367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.965375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.971965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.971987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.971994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.978616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.978637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.978645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.985210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.985231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.985238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.991775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.991797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.991804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:50.998348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:50.998369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:50.998376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.004901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.004922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.004929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.011496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.011517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.011524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.018072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.018093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.024625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.024646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.024654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.031233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.031254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.031262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.037838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.037859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.037867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.044427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.044447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.044455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.051028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.051050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.051058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.057599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.057619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.057627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.064201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.064222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.064234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.070861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.070881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.070889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.077504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.077525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.077533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.084086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.084107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.084115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.090613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.090633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.090641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.097127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.097148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.097156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.103690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.103711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.103719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.110243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.110264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.110272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.116801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.116820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.116828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.124074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.124095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.588 [2024-11-06 12:37:51.124104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.588 [2024-11-06 12:37:51.130732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.588 [2024-11-06 12:37:51.130753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.130760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.137311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.137332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.137340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.144222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.144243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.144251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.151878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.151900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.151908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.159958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.159981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.167454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.167479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.167487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.174141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.174170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.180764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.180785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.180796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.187356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.187377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.187385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.194021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.194043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.194051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.589 [2024-11-06 12:37:51.200754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.589 [2024-11-06 12:37:51.200777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.589 [2024-11-06 12:37:51.200785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.208271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.208293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.208302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.215721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.215743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.215750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.223021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.223043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.223050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.230268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.230288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.230296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.237287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.237307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.237315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.244106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.244130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.244138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.251097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.251118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.251126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.258279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.258300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.258308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.265440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.265466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.265474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.271838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.271859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.271867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.278210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.278231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.278239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.284570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.284591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.284599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.290915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.290936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.290943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.297223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.297244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.297252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.303667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.303689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.303696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.310266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.310287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.310294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.316871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.316891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.848 [2024-11-06 12:37:51.316899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.848 [2024-11-06 12:37:51.323511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.848 [2024-11-06 12:37:51.323532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.323539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.330132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.330152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.330160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.336681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.336702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.336710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.343294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.343314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.343323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.349964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.349984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.349991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.356541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.356562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.356573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.363231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.363252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.363260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.369842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.369863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.369870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.376488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.376509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.376518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.383167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.383188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.383195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.389746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.389767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.389774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:19.849 [2024-11-06 12:37:51.396340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60)
00:31:19.849 [2024-11-06 12:37:51.396361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.849 [2024-11-06 12:37:51.396369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022
p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.402952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.402973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.402980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.409684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.409705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.409712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.416475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.416499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.416507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.423407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.423428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.423435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.430154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.430175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.430183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.436969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.436999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.443792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.443813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.443821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.450614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.450634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.450641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.849 [2024-11-06 12:37:51.457523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:19.849 [2024-11-06 12:37:51.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-11-06 12:37:51.457552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.108 [2024-11-06 12:37:51.464469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.108 [2024-11-06 12:37:51.464490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.108 [2024-11-06 12:37:51.464500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.108 [2024-11-06 12:37:51.471387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.108 [2024-11-06 12:37:51.471408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.471415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.478234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.478256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.478264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.485369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.485391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.485400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.492966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.492988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.492996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.499505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.499526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.499535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.506200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.506221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.506229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.512703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.512724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.512732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.519192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.519220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.525832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.525853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.525861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.532700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 
12:37:51.532721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.532733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.539578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.539599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.539607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.546401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.546423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.546430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.553266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.553287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.553294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 4603.00 IOPS, 575.38 MiB/s [2024-11-06T11:37:51.724Z] [2024-11-06 12:37:51.561353] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.561375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.561383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.568018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.568038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.568046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.571749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.571770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.571779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.578390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.578412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.578420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.584918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.584939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.584947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.591260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.591281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.591289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.597949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.597971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.597980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.604631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.604653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.604661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.611226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.611248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.611255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.617585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.617606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.617614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.624010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.624031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.624038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.630407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.630430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.630437] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.636755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.636776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.636783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.109 [2024-11-06 12:37:51.643042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.109 [2024-11-06 12:37:51.643063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.109 [2024-11-06 12:37:51.643075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.649364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.649385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.649392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.655790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.655811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.655820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.662248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.662270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.662278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.668760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.668781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.668788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.675390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.675412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.675421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.682015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.682037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.682044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.688603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.688633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.695233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.695254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.695262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.701694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.701728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.708179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.708199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.714601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.714622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.714630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.110 [2024-11-06 12:37:51.721070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.110 [2024-11-06 12:37:51.721090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.110 [2024-11-06 12:37:51.721098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.727727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.727748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.727755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.734338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 
00:31:20.369 [2024-11-06 12:37:51.734358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.734366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.740898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.740918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.740926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.747464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.747484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.754042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.754063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.754070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.760651] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.760671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.760679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.767300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.767321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.767329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.773585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.773613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.780067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.780088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.780097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.786544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.786565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.786573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.793069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.793091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.793099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.799598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.799618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.799625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.806129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.806150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.806157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.812503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.812523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.812535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.819011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.819032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.819039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.825692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.825712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.825721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.832263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.832283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.369 [2024-11-06 12:37:51.832291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.369 [2024-11-06 12:37:51.838969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.369 [2024-11-06 12:37:51.838990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.845618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.845639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.845647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.852186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.852207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.852215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.858672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.858692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.858699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.865106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.865128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.865136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.871626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.871651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.871659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.878131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.878152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.878160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.884704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.884725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.884733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.891428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.891449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.891457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.898034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.898055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.898063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.904683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.904705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.904712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.911552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.911573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.911581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.918368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.918388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.918396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.925011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.925032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.925044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.931401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.931422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.931430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.937534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 
00:31:20.370 [2024-11-06 12:37:51.937555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.937563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.943914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.943934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.943942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.950132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.950153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.950161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.956300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.956321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.956328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.962657] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.962678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.962685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.969287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.969307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.969314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.975872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.975894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.975901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.370 [2024-11-06 12:37:51.982410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.370 [2024-11-06 12:37:51.982435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.370 [2024-11-06 12:37:51.982442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:51.989023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:51.989045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:51.989052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:51.995573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:51.995593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:51.995602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.002080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.002101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.002109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.008630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.008651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.008658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.015951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.015973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.015980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.024375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.024398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.024407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.032610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.032640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.039592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.039613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.039620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.046088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.046109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.046117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.052399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.052421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.052428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.058734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.058755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.058764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.064988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.065009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:20.630 [2024-11-06 12:37:52.065016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.071545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.071565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.071572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.078212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.078233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.078241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.084714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.084736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.084743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.091271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.091293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.091300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.097795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.097816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.097828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.104335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.104356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.104362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.110826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.110847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.110854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.117400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.117420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.117427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.123864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.123884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.630 [2024-11-06 12:37:52.123892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.630 [2024-11-06 12:37:52.130427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.630 [2024-11-06 12:37:52.130448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.130455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.136939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.136960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.136967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.143429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 
00:31:20.631 [2024-11-06 12:37:52.143450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.143464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.150070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.150091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.150099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.156669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.156694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.156702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.163198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.163219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.163226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.169847] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.169868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.169875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.176613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.176635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.176643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.183604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.183625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.183632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.190409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.190430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.190438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.197158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.197179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.197187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.203957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.203979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.203986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.210714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.210735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.210743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.217672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.217693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.217700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.224526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.224547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.224556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.231220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.231241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.231248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.237703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.237725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.237733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.631 [2024-11-06 12:37:52.244249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.631 [2024-11-06 12:37:52.244271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.631 [2024-11-06 12:37:52.244279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.250841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.250863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.250871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.257406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.257427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.257434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.263969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.263990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.263998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.270510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.270530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.270541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.277126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.277146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.277153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.890 [2024-11-06 12:37:52.283548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.890 [2024-11-06 12:37:52.283569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.890 [2024-11-06 12:37:52.283576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.290118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.290138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.290145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.296736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.296756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.296763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.303237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.303258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.303267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.309714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.309734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.309741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.316431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.316452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.316464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.323107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.323127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.323135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.329921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.329945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.329953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.336743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.336764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.336771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.343367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.343387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.343395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.349916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 
00:31:20.891 [2024-11-06 12:37:52.349937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.349945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.356424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.356445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.356452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.362912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.362932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.362940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.369475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.369495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.369504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.375979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.376000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.376008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.382344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.382364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.382373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.388555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.388575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.388583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.394862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.394883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.394891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.401251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.401272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.401279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.407517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.407538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.407546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.413898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.413919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.413926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.420503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.420524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.420531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.426998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.427018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.427026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.433671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.433692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.433699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.440395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.440416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.440426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.446878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.446900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.446907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.453402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.453423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.453432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.459971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.891 [2024-11-06 12:37:52.459993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.891 [2024-11-06 12:37:52.460001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.891 [2024-11-06 12:37:52.466377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.466397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.466404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.472873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.472894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.472901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.479683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.479704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.479711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.486211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.486232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.486240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.492610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.492631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.492639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.499233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.499254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.499262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.892 [2024-11-06 12:37:52.505943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:20.892 [2024-11-06 12:37:52.505965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.892 [2024-11-06 12:37:52.505973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.512652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.512672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.512679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.519301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.519322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.519329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.525854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.525875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.525882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.532377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.532398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.532407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.539016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.539036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.539043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.545734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.545755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.545762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.150 [2024-11-06 12:37:52.552361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.552381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.552394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.150 4659.50 IOPS, 582.44 MiB/s [2024-11-06T11:37:52.765Z] [2024-11-06 12:37:52.559972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1919a60) 00:31:21.150 [2024-11-06 12:37:52.559993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.150 [2024-11-06 12:37:52.560001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.150 00:31:21.150 Latency(us) 00:31:21.150 [2024-11-06T11:37:52.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:21.150 nvme0n1 : 2.00 4660.36 582.55 0.00 0.00 3430.11 1310.72 14417.92 00:31:21.150 [2024-11-06T11:37:52.765Z] =================================================================================================================== 00:31:21.150 [2024-11-06T11:37:52.765Z] Total : 4660.36 582.55 0.00 0.00 3430.11 1310.72 14417.92 00:31:21.150 { 00:31:21.150 "results": [ 00:31:21.150 { 00:31:21.150 "job": "nvme0n1", 00:31:21.150 "core_mask": "0x2", 00:31:21.150 "workload": "randread", 00:31:21.150 "status": "finished", 00:31:21.150 "queue_depth": 16, 00:31:21.150 "io_size": 131072, 00:31:21.150 "runtime": 2.003064, 00:31:21.150 "iops": 4660.360327977538, 00:31:21.150 "mibps": 582.5450409971922, 00:31:21.150 "io_failed": 0, 00:31:21.150 "io_timeout": 0, 00:31:21.150 "avg_latency_us": 3430.106117933486, 
00:31:21.150 "min_latency_us": 1310.72, 00:31:21.150 "max_latency_us": 14417.92 00:31:21.150 } 00:31:21.150 ], 00:31:21.150 "core_count": 1 00:31:21.150 } 00:31:21.150 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:21.150 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:21.150 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:21.150 | .driver_specific 00:31:21.150 | .nvme_error 00:31:21.150 | .status_code 00:31:21.150 | .command_transient_transport_error' 00:31:21.150 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 302 > 0 )) 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 346137 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 346137 ']' 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 346137 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 346137 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']'
00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 346137'
00:31:21.409 killing process with pid 346137
00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 346137
00:31:21.409 Received shutdown signal, test time was about 2.000000 seconds
00:31:21.409
00:31:21.409 Latency(us)
00:31:21.409 [2024-11-06T11:37:53.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:21.409 [2024-11-06T11:37:53.024Z] ===================================================================================================================
00:31:21.409 [2024-11-06T11:37:53.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:21.409 12:37:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 346137
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=346678
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 346678 /var/tmp/bperf.sock
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 346678 ']'
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:21.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:21.667 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:21.667 [2024-11-06 12:37:53.133210] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:31:21.667 [2024-11-06 12:37:53.133273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346678 ]
00:31:21.667 [2024-11-06 12:37:53.199484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:21.667 [2024-11-06 12:37:53.239844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.925 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:21.925 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:31:21.925 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:21.925 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:22.182 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:22.183 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.183 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:22.183 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.183 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.183 12:37:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.749 nvme0n1
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:22.749 12:37:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:22.749 Running I/O for 2 seconds...
00:31:22.749 [2024-11-06 12:37:54.284348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.284543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.284568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:22.749 [2024-11-06 12:37:54.299005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.299185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.299206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:22.749 [2024-11-06 12:37:54.313731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.313904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.313922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:22.749 [2024-11-06 12:37:54.328359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.328538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.328556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:22.749 [2024-11-06 12:37:54.343054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.343224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.343241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:22.749 [2024-11-06 12:37:54.357714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:22.749 [2024-11-06 12:37:54.357883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:22.749 [2024-11-06 12:37:54.357900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.372603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.372789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.372810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.387280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.387449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.387470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.401925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.402095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.402113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.416735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.416905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.416922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.431355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.431535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.431552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.446004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.446174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.446191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.460653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.460825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.460842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.475283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.475452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.475473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.490182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.490354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.490371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.504811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.504991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.505007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.519457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.519636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.519653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.534081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.534268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.548713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.548882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.548899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.563342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.563521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.563538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.577966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.578137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.578153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.592576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.592747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.607197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.607368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.607384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.008 [2024-11-06 12:37:54.621922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.008 [2024-11-06 12:37:54.622097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.008 [2024-11-06 12:37:54.622114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.636732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.636906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.636922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.651402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.651582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.651601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.666042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.666211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.666227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.680744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.680916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.680934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.695386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.695566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.695583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.710058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.710228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.710245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.724701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.724871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.724888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.739332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.739507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.739524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.754011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.754178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.754198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.768659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.768833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.768849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.783332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.783511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.783527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.797994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.798163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.798179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.812673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.812846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.812862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.827369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.827549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.827566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.842029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.842200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.842217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.856667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.856837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.856854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.267 [2024-11-06 12:37:54.871289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.267 [2024-11-06 12:37:54.871465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.267 [2024-11-06 12:37:54.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.886167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.886347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.886364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.900847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.901021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.901038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.915561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.915733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.915750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.930175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.930344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.930360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.944875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.945044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.945061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.959491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.959661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.959678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.974169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.974338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.974355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:54.988830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:54.988999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:54.989016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.003489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.003657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.003673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.018147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.018315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.018332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.032785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.032952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.032969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.047441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.047616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.047633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.062137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.062306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.062323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.076807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.076975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.076992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.091449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.091628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.091656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.106152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.106320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.106337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.120799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.120971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.120989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.526 [2024-11-06 12:37:55.135456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.526 [2024-11-06 12:37:55.135634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.526 [2024-11-06 12:37:55.135658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.784 [2024-11-06 12:37:55.150396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.784 [2024-11-06 12:37:55.150581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.784 [2024-11-06 12:37:55.150598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.784 [2024-11-06 12:37:55.165067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.784 [2024-11-06 12:37:55.165229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.784 [2024-11-06 12:37:55.165245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.784 [2024-11-06 12:37:55.179725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.784 [2024-11-06 12:37:55.179885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.784 [2024-11-06 12:37:55.179902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.784 [2024-11-06 12:37:55.194572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.784 [2024-11-06 12:37:55.194737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.784 [2024-11-06 12:37:55.194754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.785 [2024-11-06 12:37:55.209262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.785 [2024-11-06 12:37:55.209430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.785 [2024-11-06 12:37:55.209448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.785 [2024-11-06 12:37:55.223901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.785 [2024-11-06 12:37:55.224071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.785 [2024-11-06 12:37:55.224087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.785 [2024-11-06 12:37:55.238541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78
00:31:23.785 [2024-11-06 12:37:55.238712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.785 [2024-11-06 12:37:55.238729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:23.785 [2024-11-06 12:37:55.253178]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.253347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.253363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 17266.00 IOPS, 67.45 MiB/s [2024-11-06T11:37:55.400Z] [2024-11-06 12:37:55.267832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.268007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.268024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.282488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.282658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.282675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.297111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.297279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.297296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.311778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.311947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.311967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.326415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.326596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.326612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.341079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.341247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.341265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.355707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.355877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.355893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.370364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.370541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.370557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.384998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.385183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:23.785 [2024-11-06 12:37:55.399751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:23.785 [2024-11-06 12:37:55.399926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:23.785 [2024-11-06 12:37:55.399942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.414585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.414757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:24.043 [2024-11-06 12:37:55.414774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.429338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.429524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.429541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.443993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.444162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.444178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.458628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.458797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.458814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.473233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.473400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:20343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.473417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.488120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.488291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.488308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.502754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.502926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.502943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.517427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.517611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.517631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.532053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.532223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.043 [2024-11-06 12:37:55.532239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.043 [2024-11-06 12:37:55.546713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.043 [2024-11-06 12:37:55.546883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.546900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.561314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.561483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.561499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.575994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.576161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.576177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.590602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 
00:31:24.044 [2024-11-06 12:37:55.590773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.590789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.605274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.605441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.605457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.619943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.620112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.620128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.634598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.634786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.044 [2024-11-06 12:37:55.649220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.044 [2024-11-06 12:37:55.649395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.044 [2024-11-06 12:37:55.649412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.664171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.664346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.664373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.678831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.679003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.679018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.693467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.693637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.693655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.708148] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.708317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.722775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.722946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.722963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.737439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.737616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.737633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.752069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.752238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.752255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 
m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.766719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.766886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.766902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.781329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.781506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.781523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.795974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.796144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.796161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.810620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.810789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.810806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.825247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.825418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.825434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.839908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.840076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.840092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.854537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.854707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.854723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.869166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.869336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.302 [2024-11-06 12:37:55.869352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.302 [2024-11-06 12:37:55.883795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.302 [2024-11-06 12:37:55.883963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.303 [2024-11-06 12:37:55.883979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.303 [2024-11-06 12:37:55.898432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.303 [2024-11-06 12:37:55.898612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.303 [2024-11-06 12:37:55.898632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.303 [2024-11-06 12:37:55.913078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.303 [2024-11-06 12:37:55.913248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.303 [2024-11-06 12:37:55.913264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:55.928007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:55.928180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:24.561 [2024-11-06 12:37:55.928196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:55.942671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:55.942843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:55.942860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:55.957292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:55.957465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:55.957481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:55.971945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:55.972114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:55.972130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:55.986581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:55.986751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:55.986767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.001224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.001391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.001407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.015853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.016021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.016037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.030508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.030677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.030694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.045143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.045310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.045327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.059777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.059946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.059962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.074401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.074579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.074595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.089015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.089183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.089199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.103654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 
00:31:24.561 [2024-11-06 12:37:56.103823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.103840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.118274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.118442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.118464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.132890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.133059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.133075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.147562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.147732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.147752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.162213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.162382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.162399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.561 [2024-11-06 12:37:56.176932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.561 [2024-11-06 12:37:56.177105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.561 [2024-11-06 12:37:56.177123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.191726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.819 [2024-11-06 12:37:56.191901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.819 [2024-11-06 12:37:56.191918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.206453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.819 [2024-11-06 12:37:56.206635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.819 [2024-11-06 12:37:56.206652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.221270] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.819 [2024-11-06 12:37:56.221443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.819 [2024-11-06 12:37:56.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.235977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.819 [2024-11-06 12:37:56.236146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.819 [2024-11-06 12:37:56.236162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.250593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.819 [2024-11-06 12:37:56.250764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.819 [2024-11-06 12:37:56.250781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.819 [2024-11-06 12:37:56.265281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca1b0) with pdu=0x2000166fda78 00:31:24.820 [2024-11-06 12:37:56.265450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.820 [2024-11-06 12:37:56.265472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 
m:0 dnr:0 00:31:24.820 17346.50 IOPS, 67.76 MiB/s 00:31:24.820 Latency(us) 00:31:24.820 [2024-11-06T11:37:56.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.820 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.820 nvme0n1 : 2.01 17348.14 67.77 0.00 0.00 7362.87 6434.44 16086.11 00:31:24.820 [2024-11-06T11:37:56.435Z] =================================================================================================================== 00:31:24.820 [2024-11-06T11:37:56.435Z] Total : 17348.14 67.77 0.00 0.00 7362.87 6434.44 16086.11 00:31:24.820 { 00:31:24.820 "results": [ 00:31:24.820 { 00:31:24.820 "job": "nvme0n1", 00:31:24.820 "core_mask": "0x2", 00:31:24.820 "workload": "randwrite", 00:31:24.820 "status": "finished", 00:31:24.820 "queue_depth": 128, 00:31:24.820 "io_size": 4096, 00:31:24.820 "runtime": 2.008111, 00:31:24.820 "iops": 17348.14459957642, 00:31:24.820 "mibps": 67.76618984209539, 00:31:24.820 "io_failed": 0, 00:31:24.820 "io_timeout": 0, 00:31:24.820 "avg_latency_us": 7362.867167666562, 00:31:24.820 "min_latency_us": 6434.443636363636, 00:31:24.820 "max_latency_us": 16086.10909090909 00:31:24.820 } 00:31:24.820 ], 00:31:24.820 "core_count": 1 00:31:24.820 } 00:31:24.820 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:24.820 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:24.820 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:24.820 | .driver_specific 00:31:24.820 | .nvme_error 00:31:24.820 | .status_code 00:31:24.820 | .command_transient_transport_error' 00:31:24.820 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 346678 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 346678 ']' 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 346678 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 346678 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 346678' 00:31:25.078 killing process with pid 346678 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 346678 00:31:25.078 Received shutdown signal, test time was about 2.000000 seconds 00:31:25.078 00:31:25.078 Latency(us) 00:31:25.078 [2024-11-06T11:37:56.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.078 [2024-11-06T11:37:56.693Z] =================================================================================================================== 00:31:25.078 [2024-11-06T11:37:56.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:25.078 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@976 -- # wait 346678 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=347462 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 347462 /var/tmp/bperf.sock 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 347462 ']' 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:25.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:25.336 12:37:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:25.336 [2024-11-06 12:37:56.843425] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:25.336 [2024-11-06 12:37:56.843494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347462 ] 00:31:25.336 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:25.336 Zero copy mechanism will not be used. 00:31:25.336 [2024-11-06 12:37:56.908818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.336 [2024-11-06 12:37:56.945455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.594 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:25.594 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:31:25.594 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:25.594 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:25.852 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:25.852 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.852 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:31:25.852 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.853 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:25.853 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:26.110 nvme0n1 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:26.110 12:37:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:26.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:26.369 Zero copy mechanism will not be used. 00:31:26.369 Running I/O for 2 seconds... 
00:31:26.369 [2024-11-06 12:37:57.854392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.854597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.854623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.861760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.861909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.861930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.869184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.869325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.869344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.876700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.876816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.876834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.883625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.883714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.883732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.890124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.890226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.890245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.897366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.897509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.897527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.904808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.904916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.904934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.912002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.912070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.912088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.919224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.919294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.919312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.926299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.926393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.369 [2024-11-06 12:37:57.926413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.933150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.933243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:26.369 [2024-11-06 12:37:57.933261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.369 [2024-11-06 12:37:57.940100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.369 [2024-11-06 12:37:57.940203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.940220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.947151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.947259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.947277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.954338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.954437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.954455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.961161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.961256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.961274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.967832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.967978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.968001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.974763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.974829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.974847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.370 [2024-11-06 12:37:57.981835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.370 [2024-11-06 12:37:57.982169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.370 [2024-11-06 12:37:57.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:57.989032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:57.989199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:57.989218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:57.995190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:57.995296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:57.995314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.000986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.001102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.001120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.006668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.006737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.006755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.012501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:26.629 [2024-11-06 12:37:58.012588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.012607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.018257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.018364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.018382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.024157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.024244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.024262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.030035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.030145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.036298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.036446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.036470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.042790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.042951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.049624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.049743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.049761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.056753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.056833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.056851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.629 [2024-11-06 12:37:58.063604] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.629 [2024-11-06 12:37:58.063724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.629 [2024-11-06 12:37:58.063742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.071162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.071229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.071248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.077894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.077999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.078017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.084056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.084182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.084200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:26.630 [2024-11-06 12:37:58.089860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.089952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.089971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.095582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.095713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.095731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.101478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.101597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.101615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.107768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.107905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.107923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.114198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.114350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.114368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.120450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.120596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.120614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.126842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.126954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.126972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.133156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.133293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.133315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.139446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.139587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.139605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.146206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.146393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.152664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.152792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.152810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.159069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.159217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:26.630 [2024-11-06 12:37:58.159235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.165441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.165577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.165595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.171896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.172044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.172062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.178383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.178535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.178553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.184717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.184835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.184852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.191026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.191171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.191189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.197599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.197749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.197767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.204115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.204263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.204281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.210082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.210219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.210237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.215694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.215795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.215813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.221245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.221361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.221380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.226829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.630 [2024-11-06 12:37:58.226918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.226936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.232314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:26.630 [2024-11-06 12:37:58.232449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.630 [2024-11-06 12:37:58.232472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.630 [2024-11-06 12:37:58.238395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.631 [2024-11-06 12:37:58.238533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.631 [2024-11-06 12:37:58.238551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.631 [2024-11-06 12:37:58.244766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.631 [2024-11-06 12:37:58.244895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.631 [2024-11-06 12:37:58.244913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.251225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.251357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.251374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.257501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.257645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.257663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.263939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.264066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.264084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.270236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.270380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.270398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.276590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.276750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.276767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.282958] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.283089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.283106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.289421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.289577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.289595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.295795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.295945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.302125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.302252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.302270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:31:26.890 [2024-11-06 12:37:58.308557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.308715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.308732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.314827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.314957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.314975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.321201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.321340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.327546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.327690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.327708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.334094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.334231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.334249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.340630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.340774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.340792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.347247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.347379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.347396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.354863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.355022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.355040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.362092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.362210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.362228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.368221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.368307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.368325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.890 [2024-11-06 12:37:58.374042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.890 [2024-11-06 12:37:58.374159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.890 [2024-11-06 12:37:58.374177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.379971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.380065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:26.891 [2024-11-06 12:37:58.380083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.386381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.386444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.386469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.392871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.393011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.393030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.398749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.398858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.398875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.404767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.404831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.404849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.410601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.410729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.410748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.416448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.416587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.416604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.422348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.422467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.422485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.428188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.428278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.428296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.434006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.434090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.434108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.439753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.439842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.439860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.445530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.445633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.445651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.451853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:26.891 [2024-11-06 12:37:58.451975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.451992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.458703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.458945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.458965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.465153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.465288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.465305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.471504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.471649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.471667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.477842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.477976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.477995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.484495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.484637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.484656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.490853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.490987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.491006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.497585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.497741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:26.891 [2024-11-06 12:37:58.504045] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:26.891 [2024-11-06 12:37:58.504160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.891 [2024-11-06 12:37:58.504177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.510654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.510867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.510885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.517022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.517165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.517182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.523508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.523625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.523643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:27.151 [2024-11-06 12:37:58.529955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.530129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.530147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.535719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.535840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.535859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.541442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.541566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.541583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.547212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.547317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.547335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.552930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.553033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.553051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.558602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.558671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.558689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.564272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.564361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.569955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.570044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.570062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.575599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.575727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.575745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.581946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.582064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.582082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.588183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.588267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.588285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.593775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.593913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.151 [2024-11-06 12:37:58.593932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.599619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.599705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.599723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.605687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.605813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.611401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.611490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.611508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.617073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.617213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.617235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.622661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.622787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.622805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.628314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.628437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.628455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.633879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.633971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.633989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.639486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.639573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.639591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.645034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.645161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.645178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.651047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.151 [2024-11-06 12:37:58.651175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.151 [2024-11-06 12:37:58.651194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.151 [2024-11-06 12:37:58.657384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.657532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.657549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.663770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:27.152 [2024-11-06 12:37:58.663896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.663913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.670078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.670236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.670253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.676639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.676754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.676772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.683287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.683407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.683426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.690692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.690816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.690834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.698238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.698370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.698388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.706151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.706332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.714809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.714927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.714945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.722962] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.723067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.723084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.731272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.731407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.731426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.740246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.740377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.740395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.748515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.748673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.748691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:31:27.152 [2024-11-06 12:37:58.756836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.756954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.756973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.152 [2024-11-06 12:37:58.765942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.152 [2024-11-06 12:37:58.766083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.152 [2024-11-06 12:37:58.766101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.411 [2024-11-06 12:37:58.774538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.411 [2024-11-06 12:37:58.774684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.411 [2024-11-06 12:37:58.774701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.411 [2024-11-06 12:37:58.782850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.411 [2024-11-06 12:37:58.782977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.411 [2024-11-06 12:37:58.782996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.411 [2024-11-06 12:37:58.791441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.411 [2024-11-06 12:37:58.791572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.411 [2024-11-06 12:37:58.791590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.411 [2024-11-06 12:37:58.799527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.411 [2024-11-06 12:37:58.799622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.411 [2024-11-06 12:37:58.799640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.411 [2024-11-06 12:37:58.806347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.806484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.806506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.812115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.812265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.812283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.817831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.817951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.817969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.823469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.823545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.823563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.829127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.829230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.829248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.834736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.834827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.412 [2024-11-06 12:37:58.834845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.840299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.840416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.840434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.845906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.846005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.846024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 4744.00 IOPS, 593.00 MiB/s [2024-11-06T11:37:59.027Z] [2024-11-06 12:37:58.852190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.852278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.852297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.857761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.857883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.857900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.863278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.863424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.863442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.868889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.868970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.868988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.874511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.874598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.874616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.880078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:27.412 [2024-11-06 12:37:58.880173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.880191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.885684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.885772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.885790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.891282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.891353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.891370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.896856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.896948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.896966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.902440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.902539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.902557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.908020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.908096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.908114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.913588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.913693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.919238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.919320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.919339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.924813] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.924927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.924944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.930376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.930474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.930492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.935995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.936062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.936080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.941606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.941669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:27.412 [2024-11-06 12:37:58.947417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.947514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.947547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.412 [2024-11-06 12:37:58.953861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.412 [2024-11-06 12:37:58.953937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.412 [2024-11-06 12:37:58.953959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.959623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.959721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.965334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.965406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.965423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.971097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.971196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.971214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.976771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.976870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.976888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.982372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.982537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.982555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.988269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.988368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.988386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.993780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.993884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:58.999307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:58.999399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:58.999417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:59.004806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:59.004894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:59.004913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:59.010386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:59.010464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.413 [2024-11-06 12:37:59.010481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:59.015925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:59.016011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:59.016029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:59.021472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:59.021543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:59.021560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.413 [2024-11-06 12:37:59.027085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.413 [2024-11-06 12:37:59.027190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.413 [2024-11-06 12:37:59.027208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.032824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.032906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.032925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.038369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.038472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.038489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.044012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.044081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.044100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.049513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.049591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.049608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.055033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.055142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.055160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.060507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.060611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.060628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.065975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.066069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.066088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.071486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.071569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.071586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.076978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:27.672 [2024-11-06 12:37:59.077063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.077080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.082524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.082611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.082628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.088034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.088130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.088147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.093519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.093610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.093628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.099022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.099122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.099143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.104572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.104662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.104680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.110096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.110187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.110205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.115606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.115706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.115724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.121106] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.121173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.121191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.126650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.126724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.132159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.132247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.132265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.137660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.137747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.137765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:31:27.672 [2024-11-06 12:37:59.143164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.143248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.143266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.148746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.148839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.148857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.154280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.672 [2024-11-06 12:37:59.154365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.672 [2024-11-06 12:37:59.154384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.672 [2024-11-06 12:37:59.159895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.159983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.160001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.165429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.165580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.165598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.170870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.170970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.170989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.176985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.177157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.177175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.183225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.183351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.183369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.189491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.189614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.189632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.195789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.195918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.195936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.202075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.202236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.202254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.208530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.208775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.673 [2024-11-06 12:37:59.208793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.214376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.214476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.219876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.220042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.225356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.225497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.225515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.230869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.230950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.230969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.236199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.236359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.236378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.242539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.242676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.242694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.248486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.248626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.248647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.254111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.254206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.254225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.259602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.259726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.259744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.265167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.265275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.265293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.270642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.270764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.270781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.276393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.276549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.276567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.673 [2024-11-06 12:37:59.282705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.673 [2024-11-06 12:37:59.282838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.673 [2024-11-06 12:37:59.282856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.289065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.289211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.289230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.295497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.295631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.295650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.301712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with 
pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.301852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.301870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.307342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.307431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.307449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.312930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.313039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.313057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.318513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.318615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.318633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.324127] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.324199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.324217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.329687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.329787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.329805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.335659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.335804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.335822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.341865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.341997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.342015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 
12:37:59.347365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.347488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.347506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.352941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.353028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.932 [2024-11-06 12:37:59.358359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.932 [2024-11-06 12:37:59.358446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.932 [2024-11-06 12:37:59.358472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.363955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.364072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.364090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.369514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.369638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.369657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.375935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.376111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.376129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.381638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.381765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.381783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.387138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.387238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.387255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.392670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.392748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.392766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.398250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.398319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.398342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.403803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.403884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.403902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.409326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.409438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.409456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.414771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.414904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.414922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.420821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.420979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.420997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.427148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.427299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.427317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.433506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.433632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.933 [2024-11-06 12:37:59.433650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.439833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.439968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.439986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.446246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.446373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.446391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.452877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.453046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.453064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.460785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.460886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.460903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.467656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.467824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.467842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.474446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.474644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.474662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.481891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.482027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.482046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.488473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.488582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.488599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.494592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.494720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.494739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.500953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.501083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.501101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.507141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.507232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.507250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.514532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:27.933 [2024-11-06 12:37:59.514692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.514710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.521699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.521791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.521809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.529376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.933 [2024-11-06 12:37:59.529561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.933 [2024-11-06 12:37:59.529579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.933 [2024-11-06 12:37:59.536261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.934 [2024-11-06 12:37:59.536381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.934 [2024-11-06 12:37:59.536400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.934 [2024-11-06 12:37:59.542772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:27.934 [2024-11-06 12:37:59.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.934 [2024-11-06 12:37:59.542933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.549171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.549308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.549326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.555706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.555837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.555856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.562077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.562209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.562227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.568442] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.568588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.568610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.574810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.574935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.574955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.581157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.581292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.581310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.587716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.587863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.587880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:31:28.193 [2024-11-06 12:37:59.594043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.594173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.594191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.600418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.600543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.600561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.606735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.606863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.606882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.613112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.613263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.613282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.619856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.620039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.627608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.627755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.627772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.634325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.634534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.640050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.640270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.640289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.645471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.645695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.645713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.650961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.651195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.651213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.657078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.657311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.193 [2024-11-06 12:37:59.657329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.193 [2024-11-06 12:37:59.663281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.193 [2024-11-06 12:37:59.663494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:28.193 [2024-11-06 12:37:59.663512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.669947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.670172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.670190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.676632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.676852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.676869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.683911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.684142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.684160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.690685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.690901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.690919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.697609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.697828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.697846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.704209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.704432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.704450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.711206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.711374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.711393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.717751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.717954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.717972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.724403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.724608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.724626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.731075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.731294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.738219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.738424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.738445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.745241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 
00:31:28.194 [2024-11-06 12:37:59.745449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.745473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.752214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.752397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.752416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.758224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.758416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.758434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.763690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.763902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.763920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.769201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.769400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.769419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.775391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.775604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.775622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.781801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.782013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.782031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.787113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.787328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.792413] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.792623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.792642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.798308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.798541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.194 [2024-11-06 12:37:59.804165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.194 [2024-11-06 12:37:59.804384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.194 [2024-11-06 12:37:59.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.452 [2024-11-06 12:37:59.809708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.452 [2024-11-06 12:37:59.809914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.452 [2024-11-06 12:37:59.809933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:28.452 [2024-11-06 12:37:59.815481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.815671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.453 [2024-11-06 12:37:59.821677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.821876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.821894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.453 [2024-11-06 12:37:59.828107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.828312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.828331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.453 [2024-11-06 12:37:59.834262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.834487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.834505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.453 [2024-11-06 12:37:59.840527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.840727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.840746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.453 [2024-11-06 12:37:59.846476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ca4f0) with pdu=0x2000166ff3c8 00:31:28.453 [2024-11-06 12:37:59.846696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.453 [2024-11-06 12:37:59.846714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.453 4966.00 IOPS, 620.75 MiB/s 00:31:28.453 Latency(us) 00:31:28.453 [2024-11-06T11:38:00.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.453 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:28.453 nvme0n1 : 2.00 4964.91 620.61 0.00 0.00 3217.79 2263.97 14537.08 00:31:28.453 [2024-11-06T11:38:00.068Z] =================================================================================================================== 00:31:28.453 [2024-11-06T11:38:00.068Z] Total : 4964.91 620.61 0.00 0.00 3217.79 2263.97 14537.08 00:31:28.453 { 00:31:28.453 "results": [ 00:31:28.453 { 00:31:28.453 "job": "nvme0n1", 00:31:28.453 "core_mask": "0x2", 00:31:28.453 "workload": "randwrite", 00:31:28.453 "status": "finished", 00:31:28.453 "queue_depth": 16, 00:31:28.453 "io_size": 131072, 00:31:28.453 "runtime": 2.003662, 00:31:28.453 "iops": 
4964.909251161124, 00:31:28.453 "mibps": 620.6136563951405, 00:31:28.453 "io_failed": 0, 00:31:28.453 "io_timeout": 0, 00:31:28.453 "avg_latency_us": 3217.794512556201, 00:31:28.453 "min_latency_us": 2263.970909090909, 00:31:28.453 "max_latency_us": 14537.076363636364 00:31:28.453 } 00:31:28.453 ], 00:31:28.453 "core_count": 1 00:31:28.453 } 00:31:28.453 12:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:28.453 12:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:28.453 12:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:28.453 | .driver_specific 00:31:28.453 | .nvme_error 00:31:28.453 | .status_code 00:31:28.453 | .command_transient_transport_error' 00:31:28.453 12:37:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 )) 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 347462 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 347462 ']' 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 347462 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 347462 00:31:28.711 12:38:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 347462' 00:31:28.711 killing process with pid 347462 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 347462 00:31:28.711 Received shutdown signal, test time was about 2.000000 seconds 00:31:28.711 00:31:28.711 Latency(us) 00:31:28.711 [2024-11-06T11:38:00.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.711 [2024-11-06T11:38:00.326Z] =================================================================================================================== 00:31:28.711 [2024-11-06T11:38:00.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.711 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 347462 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 345327 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 345327 ']' 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 345327 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 345327 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 345327' 00:31:28.969 killing process with pid 345327 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 345327 00:31:28.969 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 345327 00:31:29.227 00:31:29.227 real 0m15.283s 00:31:29.227 user 0m31.026s 00:31:29.227 sys 0m4.270s 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:29.227 ************************************ 00:31:29.227 END TEST nvmf_digest_error 00:31:29.227 ************************************ 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.227 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.228 rmmod nvme_tcp 00:31:29.228 rmmod nvme_fabrics 00:31:29.228 rmmod nvme_keyring 00:31:29.228 12:38:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 345327 ']' 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 345327 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 345327 ']' 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 345327 00:31:29.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (345327) - No such process 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 345327 is not found' 00:31:29.228 Process with pid 345327 is not found 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.228 12:38:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.762 00:31:31.762 real 0m39.136s 00:31:31.762 user 1m4.436s 00:31:31.762 sys 0m13.039s 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.762 ************************************ 00:31:31.762 END TEST nvmf_digest 00:31:31.762 ************************************ 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.762 ************************************ 00:31:31.762 START TEST nvmf_bdevperf 00:31:31.762 ************************************ 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:31.762 * Looking for test storage... 
00:31:31.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:31.762 12:38:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.762 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.763 --rc genhtml_branch_coverage=1 00:31:31.763 --rc genhtml_function_coverage=1 00:31:31.763 --rc genhtml_legend=1 00:31:31.763 --rc geninfo_all_blocks=1 00:31:31.763 --rc geninfo_unexecuted_blocks=1 00:31:31.763 00:31:31.763 ' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:31:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.763 --rc genhtml_branch_coverage=1 00:31:31.763 --rc genhtml_function_coverage=1 00:31:31.763 --rc genhtml_legend=1 00:31:31.763 --rc geninfo_all_blocks=1 00:31:31.763 --rc geninfo_unexecuted_blocks=1 00:31:31.763 00:31:31.763 ' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.763 --rc genhtml_branch_coverage=1 00:31:31.763 --rc genhtml_function_coverage=1 00:31:31.763 --rc genhtml_legend=1 00:31:31.763 --rc geninfo_all_blocks=1 00:31:31.763 --rc geninfo_unexecuted_blocks=1 00:31:31.763 00:31:31.763 ' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.763 --rc genhtml_branch_coverage=1 00:31:31.763 --rc genhtml_function_coverage=1 00:31:31.763 --rc genhtml_legend=1 00:31:31.763 --rc geninfo_all_blocks=1 00:31:31.763 --rc geninfo_unexecuted_blocks=1 00:31:31.763 00:31:31.763 ' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.763 12:38:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.030 12:38:08 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.030 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:37.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.031 
12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:37.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:37.031 Found net devices under 0000:af:00.0: cvl_0_0 00:31:37.031 12:38:08 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:37.031 Found net devices under 0000:af:00.1: cvl_0_1 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:37.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:31:37.031 00:31:37.031 --- 10.0.0.2 ping statistics --- 00:31:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.031 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:31:37.031 00:31:37.031 --- 10.0.0.1 ping statistics --- 00:31:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.031 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=351659 00:31:37.031 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 351659 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 351659 ']' 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.032 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.032 [2024-11-06 12:38:08.463387] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:31:37.032 [2024-11-06 12:38:08.463443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.032 [2024-11-06 12:38:08.535580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:37.032 [2024-11-06 12:38:08.576695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.032 [2024-11-06 12:38:08.576728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.032 [2024-11-06 12:38:08.576735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.032 [2024-11-06 12:38:08.576740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.032 [2024-11-06 12:38:08.576745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
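The target above was started with `-m 0xE`, and the log then reports reactors on cores 1, 2 and 3. As a sketch only (not part of the test scripts), the hex core mask can be decoded like this to see why exactly those three cores come up:

```shell
# Sketch: decode the reactor core mask passed to nvmf_tgt (-m 0xE).
# 0xE = binary 1110, so cores 1, 2 and 3 are selected and core 0 is not --
# consistent with the three "Reactor started on core" notices in this log.
mask=$(( 0xE ))
for core in 0 1 2 3; do
  if [ $(( (mask >> core) & 1 )) -eq 1 ]; then
    echo "reactor core $core"
  fi
done
```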
00:31:37.032 [2024-11-06 12:38:08.578034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.032 [2024-11-06 12:38:08.578136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.032 [2024-11-06 12:38:08.578138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 [2024-11-06 12:38:08.732769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 Malloc0 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 [2024-11-06 12:38:08.784534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:37.290 
12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.290 { 00:31:37.290 "params": { 00:31:37.290 "name": "Nvme$subsystem", 00:31:37.290 "trtype": "$TEST_TRANSPORT", 00:31:37.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.290 "adrfam": "ipv4", 00:31:37.290 "trsvcid": "$NVMF_PORT", 00:31:37.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.290 "hdgst": ${hdgst:-false}, 00:31:37.290 "ddgst": ${ddgst:-false} 00:31:37.290 }, 00:31:37.290 "method": "bdev_nvme_attach_controller" 00:31:37.290 } 00:31:37.290 EOF 00:31:37.290 )") 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:37.290 12:38:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.290 "params": { 00:31:37.290 "name": "Nvme1", 00:31:37.290 "trtype": "tcp", 00:31:37.290 "traddr": "10.0.0.2", 00:31:37.290 "adrfam": "ipv4", 00:31:37.290 "trsvcid": "4420", 00:31:37.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:37.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:37.290 "hdgst": false, 00:31:37.290 "ddgst": false 00:31:37.290 }, 00:31:37.290 "method": "bdev_nvme_attach_controller" 00:31:37.290 }' 00:31:37.290 [2024-11-06 12:38:08.841549] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
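The `gen_nvmf_target_json` output above shows both the heredoc template (with `$subsystem`, `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT` unexpanded) and the rendered JSON fed to bdevperf via `/dev/fd/62`. A minimal standalone sketch of that substitution, assuming the values set earlier in this run (tcp transport, target IP 10.0.0.2, port 4420):

```shell
# Sketch: render the bdev_nvme_attach_controller params the way
# gen_nvmf_target_json does, for subsystem 1. Variable values are the ones
# visible earlier in this log, assumed here for illustration.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```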
00:31:37.290 [2024-11-06 12:38:08.841604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351757 ] 00:31:37.550 [2024-11-06 12:38:08.937647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.550 [2024-11-06 12:38:08.986167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.808 Running I/O for 1 seconds... 00:31:38.742 10310.00 IOPS, 40.27 MiB/s 00:31:38.742 Latency(us) 00:31:38.742 [2024-11-06T11:38:10.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.742 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:38.742 Verification LBA range: start 0x0 length 0x4000 00:31:38.742 Nvme1n1 : 1.01 10373.98 40.52 0.00 0.00 12264.45 1697.98 12213.53 00:31:38.742 [2024-11-06T11:38:10.357Z] =================================================================================================================== 00:31:38.742 [2024-11-06T11:38:10.357Z] Total : 10373.98 40.52 0.00 0.00 12264.45 1697.98 12213.53 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=352019 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:39.000 { 00:31:39.000 "params": { 00:31:39.000 "name": "Nvme$subsystem", 00:31:39.000 "trtype": "$TEST_TRANSPORT", 00:31:39.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.000 "adrfam": "ipv4", 00:31:39.000 "trsvcid": "$NVMF_PORT", 00:31:39.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.000 "hdgst": ${hdgst:-false}, 00:31:39.000 "ddgst": ${ddgst:-false} 00:31:39.000 }, 00:31:39.000 "method": "bdev_nvme_attach_controller" 00:31:39.000 } 00:31:39.000 EOF 00:31:39.000 )") 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:39.000 12:38:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:39.000 "params": { 00:31:39.000 "name": "Nvme1", 00:31:39.000 "trtype": "tcp", 00:31:39.000 "traddr": "10.0.0.2", 00:31:39.000 "adrfam": "ipv4", 00:31:39.000 "trsvcid": "4420", 00:31:39.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:39.000 "hdgst": false, 00:31:39.000 "ddgst": false 00:31:39.000 }, 00:31:39.000 "method": "bdev_nvme_attach_controller" 00:31:39.000 }' 00:31:39.000 [2024-11-06 12:38:10.426874] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:31:39.000 [2024-11-06 12:38:10.426940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352019 ] 00:31:39.000 [2024-11-06 12:38:10.522597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.000 [2024-11-06 12:38:10.569260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.258 Running I/O for 15 seconds... 00:31:41.566 10676.00 IOPS, 41.70 MiB/s [2024-11-06T11:38:13.441Z] 10667.00 IOPS, 41.67 MiB/s [2024-11-06T11:38:13.442Z] 12:38:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 351659 00:31:41.827 12:38:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:41.827 [2024-11-06 12:38:13.389870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.389916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.389946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.389973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.389997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:41.827 [2024-11-06 12:38:13.390143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.827 [2024-11-06 12:38:13.390207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 
12:38:13.390543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.827 [2024-11-06 12:38:13.390787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:41.827 [2024-11-06 12:38:13.390796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.390951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.390973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.390985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.390995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.828 [2024-11-06 12:38:13.391132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 
[2024-11-06 12:38:13.391175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391296] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.828 [2024-11-06 12:38:13.391679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.828 [2024-11-06 12:38:13.391688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 
12:38:13.391811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:41.829 [2024-11-06 12:38:13.391954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.391976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.391988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.391998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392187] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:41.829 [2024-11-06 12:38:13.392445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.829 [2024-11-06 12:38:13.392701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.829 [2024-11-06 12:38:13.392710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:41.830 [2024-11-06 12:38:13.392907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.392920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1611a60 is same with the state(6) to be set 00:31:41.830 [2024-11-06 12:38:13.392933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:41.830 [2024-11-06 12:38:13.392941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:41.830 [2024-11-06 12:38:13.392949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:31:41.830 [2024-11-06 12:38:13.392959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:41.830 [2024-11-06 12:38:13.397258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:41.830 [2024-11-06 12:38:13.397326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:41.830 [2024-11-06 12:38:13.398142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.830 [2024-11-06 12:38:13.398190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:41.830 [2024-11-06 12:38:13.398215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:41.830 [2024-11-06 12:38:13.398810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:41.830 [2024-11-06 12:38:13.399269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:41.830 [2024-11-06 12:38:13.399281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:41.830 [2024-11-06 12:38:13.399292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:41.830 [2024-11-06 12:38:13.399302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:41.830 [2024-11-06 12:38:13.412308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:41.830 [2024-11-06 12:38:13.412882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.830 [2024-11-06 12:38:13.412930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:41.830 [2024-11-06 12:38:13.412954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:41.830 [2024-11-06 12:38:13.413550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:41.830 [2024-11-06 12:38:13.414136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:41.830 [2024-11-06 12:38:13.414160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:41.830 [2024-11-06 12:38:13.414195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:41.830 [2024-11-06 12:38:13.414205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:41.830 [2024-11-06 12:38:13.426959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:41.830 [2024-11-06 12:38:13.427513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.830 [2024-11-06 12:38:13.427559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:41.830 [2024-11-06 12:38:13.427583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:41.830 [2024-11-06 12:38:13.428116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:41.830 [2024-11-06 12:38:13.428380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:41.830 [2024-11-06 12:38:13.428396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:41.830 [2024-11-06 12:38:13.428406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:41.830 [2024-11-06 12:38:13.428416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.441811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.442347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.442372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.442384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.442656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.442923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.442935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.442945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.442954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.456495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.457046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.457094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.457119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.457713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.458270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.458282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.458292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.458302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.471040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.471575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.471599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.471610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.471872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.472137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.472148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.472158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.472171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.485662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.486203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.486253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.486278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.486859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.487250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.487267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.487282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.487296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.500762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.501239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.501262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.501273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.501545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.501811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.501822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.501833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.501842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.515318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.515874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.515920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.515943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.516475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.516741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.516753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.516762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.516771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.530057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.530625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.530674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.530698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.531288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.531560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.531573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.531583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.531592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.544623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.545168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.545212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.545235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.545827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.546410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.546440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.546450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.546465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.559227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.559730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.559753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.090 [2024-11-06 12:38:13.559764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.090 [2024-11-06 12:38:13.560028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.090 [2024-11-06 12:38:13.560293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.090 [2024-11-06 12:38:13.560305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.090 [2024-11-06 12:38:13.560315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.090 [2024-11-06 12:38:13.560324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.090 [2024-11-06 12:38:13.573834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.090 [2024-11-06 12:38:13.574394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.090 [2024-11-06 12:38:13.574416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.574427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.574702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.574968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.574980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.574990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.574999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.588515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.589068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.589090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.589101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.589364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.589638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.589651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.589660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.589669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.603189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.603768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.603821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.603845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.604424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.604730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.604743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.604752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.604761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.617759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.618289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.618311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.618322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.618593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.618858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.618874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.618884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.618893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.632388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.632925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.632948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.632959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.633223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.633494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.633507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.633517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.633526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.647008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.647577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.647623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.647646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.648226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.648671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.648684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.648694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.648702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.662133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.662697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.662742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.662767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.663345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.663766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.663778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.663789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.663802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.676800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.677358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.677402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.677425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.677930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.678195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.678208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.678218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.678227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.091 [2024-11-06 12:38:13.691481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.091 [2024-11-06 12:38:13.692007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.091 [2024-11-06 12:38:13.692029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.091 [2024-11-06 12:38:13.692040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.091 [2024-11-06 12:38:13.692303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.091 [2024-11-06 12:38:13.692575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.091 [2024-11-06 12:38:13.692587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.091 [2024-11-06 12:38:13.692597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.091 [2024-11-06 12:38:13.692606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.350 [2024-11-06 12:38:13.706259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.350 [2024-11-06 12:38:13.706845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.350 [2024-11-06 12:38:13.706894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.350 [2024-11-06 12:38:13.706920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.350 [2024-11-06 12:38:13.707516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.350 [2024-11-06 12:38:13.708013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.350 [2024-11-06 12:38:13.708024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.350 [2024-11-06 12:38:13.708035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.350 [2024-11-06 12:38:13.708044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.350 [2024-11-06 12:38:13.720811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.350 [2024-11-06 12:38:13.721397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.350 [2024-11-06 12:38:13.721445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.350 [2024-11-06 12:38:13.721487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.350 [2024-11-06 12:38:13.721968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.350 [2024-11-06 12:38:13.722233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.350 [2024-11-06 12:38:13.722245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.350 [2024-11-06 12:38:13.722255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.350 [2024-11-06 12:38:13.722264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.350 [2024-11-06 12:38:13.735527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.350 [2024-11-06 12:38:13.736057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.350 [2024-11-06 12:38:13.736102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.350 [2024-11-06 12:38:13.736127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.350 [2024-11-06 12:38:13.736718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.350 [2024-11-06 12:38:13.736985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.350 [2024-11-06 12:38:13.736997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.350 [2024-11-06 12:38:13.737007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.350 [2024-11-06 12:38:13.737016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.350 [2024-11-06 12:38:13.750237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.350 [2024-11-06 12:38:13.750749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.350 [2024-11-06 12:38:13.750772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.350 [2024-11-06 12:38:13.750783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.350 [2024-11-06 12:38:13.751046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.350 [2024-11-06 12:38:13.751309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.350 [2024-11-06 12:38:13.751321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.350 [2024-11-06 12:38:13.751331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.350 [2024-11-06 12:38:13.751339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.350 [2024-11-06 12:38:13.764827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.350 [2024-11-06 12:38:13.765364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.350 [2024-11-06 12:38:13.765408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.350 [2024-11-06 12:38:13.765431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.350 [2024-11-06 12:38:13.765961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.766227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.766239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.766249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.766258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.779493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.780032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.780077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.780100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.780656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.781047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.781064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.781078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.781092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.794658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.795168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.795190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.795200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.795471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.795736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.795748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.795758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.795767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.809275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.809832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.809854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.809865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.810129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.810393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.810409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.810419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.810427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.823957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.824488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.824511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.824522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.824785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.825049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.825061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.825071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.825079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.838611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.839134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.839175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.839200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.839791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.840087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.840099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.840109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.840118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.853369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.853919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.853965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.853988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.854579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.855162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.855173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.855183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.855196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.867929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.868473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.868519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.868542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.869023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.869289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.869300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.869310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.869319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 8911.67 IOPS, 34.81 MiB/s [2024-11-06T11:38:13.966Z] [2024-11-06 12:38:13.883517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.884046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.884092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.884117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.884712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.885296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.885328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.885338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.885347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.898105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.898578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.898601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.898613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.898877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.351 [2024-11-06 12:38:13.899142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.351 [2024-11-06 12:38:13.899154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.351 [2024-11-06 12:38:13.899165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.351 [2024-11-06 12:38:13.899173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.351 [2024-11-06 12:38:13.912677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.351 [2024-11-06 12:38:13.913129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.351 [2024-11-06 12:38:13.913150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.351 [2024-11-06 12:38:13.913162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.351 [2024-11-06 12:38:13.913425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.352 [2024-11-06 12:38:13.913699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.352 [2024-11-06 12:38:13.913711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.352 [2024-11-06 12:38:13.913721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.352 [2024-11-06 12:38:13.913730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.352 [2024-11-06 12:38:13.927227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.352 [2024-11-06 12:38:13.927764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.352 [2024-11-06 12:38:13.927786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.352 [2024-11-06 12:38:13.927797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.352 [2024-11-06 12:38:13.928060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.352 [2024-11-06 12:38:13.928325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.352 [2024-11-06 12:38:13.928336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.352 [2024-11-06 12:38:13.928346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.352 [2024-11-06 12:38:13.928355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.352 [2024-11-06 12:38:13.941865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.352 [2024-11-06 12:38:13.942393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.352 [2024-11-06 12:38:13.942437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.352 [2024-11-06 12:38:13.942476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.352 [2024-11-06 12:38:13.943057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.352 [2024-11-06 12:38:13.943416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.352 [2024-11-06 12:38:13.943428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.352 [2024-11-06 12:38:13.943437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.352 [2024-11-06 12:38:13.943446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.352 [2024-11-06 12:38:13.956449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.352 [2024-11-06 12:38:13.956986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.352 [2024-11-06 12:38:13.957030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.352 [2024-11-06 12:38:13.957062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.352 [2024-11-06 12:38:13.957550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.352 [2024-11-06 12:38:13.957816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.352 [2024-11-06 12:38:13.957827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.352 [2024-11-06 12:38:13.957837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.352 [2024-11-06 12:38:13.957846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.610 [2024-11-06 12:38:13.971240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.610 [2024-11-06 12:38:13.971787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.610 [2024-11-06 12:38:13.971811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.610 [2024-11-06 12:38:13.971823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.610 [2024-11-06 12:38:13.972087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.610 [2024-11-06 12:38:13.972352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.610 [2024-11-06 12:38:13.972363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.610 [2024-11-06 12:38:13.972373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.610 [2024-11-06 12:38:13.972382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.610 [2024-11-06 12:38:13.985911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.610 [2024-11-06 12:38:13.986385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.610 [2024-11-06 12:38:13.986407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.610 [2024-11-06 12:38:13.986417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.610 [2024-11-06 12:38:13.986689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.610 [2024-11-06 12:38:13.986954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.610 [2024-11-06 12:38:13.986964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.610 [2024-11-06 12:38:13.986974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.610 [2024-11-06 12:38:13.986983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.610 [2024-11-06 12:38:14.000526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.610 [2024-11-06 12:38:14.001077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.610 [2024-11-06 12:38:14.001099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.610 [2024-11-06 12:38:14.001110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.610 [2024-11-06 12:38:14.001373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.610 [2024-11-06 12:38:14.001644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.610 [2024-11-06 12:38:14.001661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.610 [2024-11-06 12:38:14.001671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.610 [2024-11-06 12:38:14.001680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.410868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.411344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.411387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.411410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.412003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.412687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.412702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.412712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.412722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.425486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.425960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.425999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.426023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.426597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.426863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.426875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.426885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.426898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.440155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.440691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.440735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.440758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.441348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.441618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.441631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.441640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.441649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.454915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.455346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.455390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.455412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.456003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.456593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.456619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.456640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.456669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.469665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.470057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.470079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.470089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.470353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.470624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.470637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.470646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.470655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:42.872 [2024-11-06 12:38:14.484423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:42.872 [2024-11-06 12:38:14.484911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.872 [2024-11-06 12:38:14.484939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:42.872 [2024-11-06 12:38:14.484951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:42.872 [2024-11-06 12:38:14.485217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:42.872 [2024-11-06 12:38:14.485490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:42.872 [2024-11-06 12:38:14.485503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:42.872 [2024-11-06 12:38:14.485513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:42.872 [2024-11-06 12:38:14.485522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.499144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.499611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.499636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.499648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.499913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.500181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.500194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.500204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.131 [2024-11-06 12:38:14.500213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.513743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.514250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.514273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.514285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.514555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.514822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.514835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.514845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.131 [2024-11-06 12:38:14.514854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.528366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.528768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.528802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.529071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.529335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.529346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.529356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.131 [2024-11-06 12:38:14.529365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.543150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.543615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.543638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.543649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.543912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.544177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.544189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.544198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.131 [2024-11-06 12:38:14.544207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.557741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.558289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.558311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.558322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.558591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.558858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.558869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.558879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.131 [2024-11-06 12:38:14.558888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.131 [2024-11-06 12:38:14.572392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.131 [2024-11-06 12:38:14.572967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.131 [2024-11-06 12:38:14.573012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.131 [2024-11-06 12:38:14.573035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.131 [2024-11-06 12:38:14.573625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.131 [2024-11-06 12:38:14.573931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.131 [2024-11-06 12:38:14.573946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.131 [2024-11-06 12:38:14.573956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.573965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.586990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.587520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.587566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.587590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.588075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.588340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.588351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.588361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.588370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.601642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.602119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.602162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.602185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.602776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.603066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.603077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.603087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.603096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.616378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.616856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.616879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.616890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.617153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.617417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.617429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.617439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.617455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.630982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.631431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.631453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.631471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.631736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.632000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.632011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.632021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.632030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.645568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.646114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.646136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.646147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.646411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.646683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.646695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.646705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.646713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.660211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.660691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.660735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.660758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.661245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.661520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.661533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.661543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.661552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.674810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.675350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.675371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.675400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.675972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.676237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.676249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.676258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.676268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.689553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.690029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.690051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.690061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.690324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.690596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.690609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.690618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.690629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.704133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.704675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.704699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.704709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.704973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.705239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.705250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.705260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.705269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.718834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.719293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.719316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.719327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.719601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.719867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.719878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.719888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.719897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.132 [2024-11-06 12:38:14.733405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.132 [2024-11-06 12:38:14.733848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.132 [2024-11-06 12:38:14.733870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.132 [2024-11-06 12:38:14.733880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.132 [2024-11-06 12:38:14.734143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.132 [2024-11-06 12:38:14.734407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.132 [2024-11-06 12:38:14.734419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.132 [2024-11-06 12:38:14.734429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.132 [2024-11-06 12:38:14.734438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.390 [2024-11-06 12:38:14.748058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.390 [2024-11-06 12:38:14.748537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.390 [2024-11-06 12:38:14.748587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.390 [2024-11-06 12:38:14.748612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.390 [2024-11-06 12:38:14.749194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.390 [2024-11-06 12:38:14.749524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.390 [2024-11-06 12:38:14.749537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.390 [2024-11-06 12:38:14.749547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.390 [2024-11-06 12:38:14.749557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.390 [2024-11-06 12:38:14.762613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.390 [2024-11-06 12:38:14.763109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.390 [2024-11-06 12:38:14.763133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.763144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.763409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.763683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.763701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.763712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.763721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.777270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.777834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.777858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.777869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.778131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.778396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.778407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.778417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.778426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.791923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.792381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.792404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.792414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.792686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.792952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.792965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.792974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.792983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.806503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.807060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.807104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.807127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.807685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.807951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.807963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.807973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.807986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.821249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.821775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.821798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.821809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.822073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.822337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.822348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.822358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.822366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.835882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.836334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.836367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.836637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.836902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.836914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.836923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.836932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.850455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.850908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.850930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.850941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.851204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.851475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.851487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.851496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.851505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.865024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.865567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.865613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.865636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.866213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.866662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.866674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.866684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.866693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.881645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 6683.75 IOPS, 26.11 MiB/s [2024-11-06T11:38:15.006Z] [2024-11-06 12:38:14.882184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.882228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.882252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.882751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.883017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.883029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.883039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.883048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.896287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.896783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.896827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.896850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.897432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.897703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.897716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.897726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.897734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.910992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.911540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.911563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.911579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.911842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.912108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.912119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.912128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.912137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.925645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.926170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.926193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.926203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.926475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.926741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.926753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.926762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.926771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.940269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.940823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.940845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.940856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.941119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.941383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.941395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.941405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.941414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.954904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.955429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.955451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.955469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.955732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.956001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.956013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.956023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.956032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.969519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.970047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.970092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.970115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.970708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.971004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.971015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.971025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.971034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.984299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.984854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.391 [2024-11-06 12:38:14.984876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.391 [2024-11-06 12:38:14.984886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.391 [2024-11-06 12:38:14.985151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.391 [2024-11-06 12:38:14.985415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.391 [2024-11-06 12:38:14.985426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.391 [2024-11-06 12:38:14.985436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.391 [2024-11-06 12:38:14.985445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.391 [2024-11-06 12:38:14.998941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.391 [2024-11-06 12:38:14.999476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.392 [2024-11-06 12:38:14.999521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.392 [2024-11-06 12:38:14.999544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.392 [2024-11-06 12:38:15.000035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.392 [2024-11-06 12:38:15.000309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.392 [2024-11-06 12:38:15.000322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.392 [2024-11-06 12:38:15.000332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.392 [2024-11-06 12:38:15.000345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.013746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.014317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.014342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.014354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.014627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.014893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.014907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.014917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.014926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.028442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.028970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.028994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.029005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.029270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.029582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.029596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.029606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.029615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.043119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.043683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.043730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.043753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.044239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.044510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.044523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.044540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.044550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.057814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.058390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.058434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.058471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.059016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.059281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.059292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.059302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.059311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.072555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.073102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.073125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.073135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.073398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.073670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.073683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.073693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.073702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.087180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.087733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.087755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.087766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.088029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.088293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.088305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.088314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.088323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.101825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.102365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.102423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.102453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.103043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.103307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.103319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.103329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.103338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.651 [2024-11-06 12:38:15.116364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.651 [2024-11-06 12:38:15.116844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.651 [2024-11-06 12:38:15.116867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.651 [2024-11-06 12:38:15.116878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.651 [2024-11-06 12:38:15.117141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.651 [2024-11-06 12:38:15.117405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.651 [2024-11-06 12:38:15.117417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.651 [2024-11-06 12:38:15.117427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.651 [2024-11-06 12:38:15.117435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.130954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.131496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.131519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.131530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.131794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.132061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.132072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.132082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.132091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.145615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.146169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.146192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.146203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.146473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.146756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.146768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.146778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.146787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.160316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.160879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.160924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.160947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.161544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.161809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.161821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.161831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.161840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.175093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.175619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.175643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.175654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.175917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.176182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.176193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.176203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.176212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.189721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.190273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.190296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.190307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.190577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.190845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.190856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.190866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.190879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.204410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.204977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.205000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.205010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.205274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.205545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.205557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.205567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.205576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.219084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.219609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.219632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.219643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.219907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.220172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.220184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.220194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.220202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.233707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.234260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.234282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.234293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.234564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.234830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.234841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.234851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.234860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.248352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.248903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.248925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.248936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.249199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.249470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.249482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.249492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.652 [2024-11-06 12:38:15.249501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.652 [2024-11-06 12:38:15.262991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.652 [2024-11-06 12:38:15.263465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.652 [2024-11-06 12:38:15.263489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.652 [2024-11-06 12:38:15.263501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.652 [2024-11-06 12:38:15.263765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.652 [2024-11-06 12:38:15.264030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.652 [2024-11-06 12:38:15.264042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.652 [2024-11-06 12:38:15.264052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.653 [2024-11-06 12:38:15.264062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.911 [2024-11-06 12:38:15.277726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.911 [2024-11-06 12:38:15.278212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.911 [2024-11-06 12:38:15.278236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.911 [2024-11-06 12:38:15.278248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.911 [2024-11-06 12:38:15.278520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.911 [2024-11-06 12:38:15.278787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.911 [2024-11-06 12:38:15.278799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.911 [2024-11-06 12:38:15.278809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.911 [2024-11-06 12:38:15.278818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.911 [2024-11-06 12:38:15.292317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.911 [2024-11-06 12:38:15.292855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.911 [2024-11-06 12:38:15.292907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.911 [2024-11-06 12:38:15.292939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.911 [2024-11-06 12:38:15.293534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.911 [2024-11-06 12:38:15.293831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.911 [2024-11-06 12:38:15.293843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.911 [2024-11-06 12:38:15.293853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.911 [2024-11-06 12:38:15.293862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.911 [2024-11-06 12:38:15.306872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.911 [2024-11-06 12:38:15.307430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.911 [2024-11-06 12:38:15.307488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.307512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.308089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.308684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.308710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.308731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.308750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.321532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.322099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.322145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.322168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.322700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.322965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.322977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.322986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.322995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.336261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.336810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.336832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.336843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.337109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.337373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.337391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.337401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.337410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.350907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.351473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.351519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.351542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.352027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.352415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.352432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.352446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.352467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.365998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.366560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.366605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.366627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.367206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.367801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.367829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.367850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.367868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.380650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.381208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.381230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.381241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.381513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.381779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.381791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.381801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.381814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.395316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.395881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.395926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.395950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.396541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.397026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.397037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.397046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.397055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.410064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.410617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.410640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.410651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.410915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.411179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.411191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.411200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.411209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.424715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:43.912 [2024-11-06 12:38:15.425256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.912 [2024-11-06 12:38:15.425278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:43.912 [2024-11-06 12:38:15.425289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:43.912 [2024-11-06 12:38:15.425560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:43.912 [2024-11-06 12:38:15.425825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:43.912 [2024-11-06 12:38:15.425836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:43.912 [2024-11-06 12:38:15.425846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:43.912 [2024-11-06 12:38:15.425855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:43.912 [2024-11-06 12:38:15.439464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.912 [2024-11-06 12:38:15.440026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.912 [2024-11-06 12:38:15.440070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.912 [2024-11-06 12:38:15.440093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.440688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.441129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.441140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.441150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.441160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:43.913 [2024-11-06 12:38:15.454177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.913 [2024-11-06 12:38:15.454704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.913 [2024-11-06 12:38:15.454726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.913 [2024-11-06 12:38:15.454737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.455000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.455264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.455276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.455285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.455295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:43.913 [2024-11-06 12:38:15.468797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.913 [2024-11-06 12:38:15.469343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.913 [2024-11-06 12:38:15.469365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.913 [2024-11-06 12:38:15.469376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.469647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.469912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.469923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.469933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.469942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:43.913 [2024-11-06 12:38:15.483659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.913 [2024-11-06 12:38:15.484211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.913 [2024-11-06 12:38:15.484233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.913 [2024-11-06 12:38:15.484249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.484520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.484787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.484805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.484815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.484825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:43.913 [2024-11-06 12:38:15.498326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.913 [2024-11-06 12:38:15.498793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.913 [2024-11-06 12:38:15.498853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.913 [2024-11-06 12:38:15.498877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.499397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.499669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.499682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.499692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.499701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:43.913 [2024-11-06 12:38:15.512980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:43.913 [2024-11-06 12:38:15.513539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.913 [2024-11-06 12:38:15.513586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:43.913 [2024-11-06 12:38:15.513610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:43.913 [2024-11-06 12:38:15.514163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:43.913 [2024-11-06 12:38:15.514427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:43.913 [2024-11-06 12:38:15.514440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:43.913 [2024-11-06 12:38:15.514450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:43.913 [2024-11-06 12:38:15.514467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.171 [2024-11-06 12:38:15.528214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.171 [2024-11-06 12:38:15.528773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.171 [2024-11-06 12:38:15.528797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.171 [2024-11-06 12:38:15.528810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.529074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.529339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.529355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.529365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.529375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.542944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.543503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.543527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.543539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.543803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.544068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.544080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.544090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.544100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.557592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.558136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.558159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.558170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.558433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.558704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.558717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.558727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.558736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.572237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.572714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.572736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.572747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.573010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.573275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.573286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.573296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.573310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.586797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.587273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.587318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.587341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.587935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.588481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.588493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.588503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.588512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.601497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.602064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.602087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.602098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.602361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.602641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.602654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.602664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.602673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.616183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.616707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.616730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.616741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.617004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.617268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.617280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.617289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.617298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.630810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.631280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.631303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.631314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.631585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.631852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.631864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.631874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.631884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.645417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.645973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.645996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.646008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.646271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.646540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.646553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.646563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.646571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.660079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.660589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.660612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.660623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.660886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.661151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.661163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.661173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.661181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.674697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.675222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.675243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.675258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.675529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.675796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.675807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.675817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.675826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.689324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.689838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.689861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.689873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.690137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.690401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.690413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.690423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.690431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.704016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.704493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.704516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.704527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.704790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.705055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.705066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.705076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.705085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.718615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.719158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.719180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.719191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.719454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.719728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.719745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.719755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.719764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.733317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.733903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.733926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.733937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.734200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.734473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.734486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.734496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.734506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.172 [2024-11-06 12:38:15.748026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.172 [2024-11-06 12:38:15.748556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.172 [2024-11-06 12:38:15.748579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.172 [2024-11-06 12:38:15.748590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.172 [2024-11-06 12:38:15.748853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.172 [2024-11-06 12:38:15.749118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.172 [2024-11-06 12:38:15.749130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.172 [2024-11-06 12:38:15.749140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.172 [2024-11-06 12:38:15.749148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.173 [2024-11-06 12:38:15.762674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.173 [2024-11-06 12:38:15.763228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.173 [2024-11-06 12:38:15.763251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.173 [2024-11-06 12:38:15.763261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.173 [2024-11-06 12:38:15.763532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.173 [2024-11-06 12:38:15.763799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.173 [2024-11-06 12:38:15.763811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.173 [2024-11-06 12:38:15.763820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.173 [2024-11-06 12:38:15.763834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.173 [2024-11-06 12:38:15.777339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.173 [2024-11-06 12:38:15.777906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.173 [2024-11-06 12:38:15.777929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.173 [2024-11-06 12:38:15.777941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.173 [2024-11-06 12:38:15.778206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.173 [2024-11-06 12:38:15.778477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.173 [2024-11-06 12:38:15.778490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.173 [2024-11-06 12:38:15.778500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.173 [2024-11-06 12:38:15.778510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.429 [2024-11-06 12:38:15.792198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.429 [2024-11-06 12:38:15.792758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.429 [2024-11-06 12:38:15.792808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.429 [2024-11-06 12:38:15.792833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.429 [2024-11-06 12:38:15.793413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.429 [2024-11-06 12:38:15.794009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.429 [2024-11-06 12:38:15.794037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.429 [2024-11-06 12:38:15.794047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.429 [2024-11-06 12:38:15.794057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.429 [2024-11-06 12:38:15.806847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.429 [2024-11-06 12:38:15.807406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.429 [2024-11-06 12:38:15.807428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.429 [2024-11-06 12:38:15.807439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.807713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.807979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.807991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.808001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.808010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.821510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.821906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.821928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.821939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.822203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.822479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.822492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.822502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.822511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.836268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.836673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.836696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.836707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.836971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.837235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.837247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.837257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.837266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.851028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.851587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.851611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.851622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.851886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.852151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.852163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.852173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.852181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.865723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.866223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.866245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.866256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.866535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.866802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.866815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.866825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.866833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.880351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.880840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.880862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.880873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.881136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.881400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.881411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.881421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.881430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 5347.00 IOPS, 20.89 MiB/s [2024-11-06T11:38:16.045Z] [2024-11-06 12:38:15.895131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.895644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.895691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.895714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.896256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.896525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.896538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.896548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.896558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.909839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.910403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.910448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.910486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.911065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.911373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.911385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.911395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.911404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.924436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.924964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.924987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.924998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.925261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.925535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.925548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.925557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.925566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.939116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.939578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.939601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.939612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.939875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.940139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.940150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.940160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.940170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.953690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.954228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.954272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.954295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.954887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.955405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.955417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.955432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.955441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.968456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.969000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.969022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.969033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.969296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.969568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.969581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.969591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.969600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.983129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.983627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.983650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.983661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.983924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.984189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.984201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.984210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.984219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:15.997769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:15.998310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:15.998332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:15.998343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:15.998614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:15.998880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:15.998892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:15.998902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:15.998911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:16.012429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:16.012918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:16.012940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:16.012952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.430 [2024-11-06 12:38:16.013214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.430 [2024-11-06 12:38:16.013488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.430 [2024-11-06 12:38:16.013500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.430 [2024-11-06 12:38:16.013510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.430 [2024-11-06 12:38:16.013519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.430 [2024-11-06 12:38:16.027041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.430 [2024-11-06 12:38:16.027571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.430 [2024-11-06 12:38:16.027623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.430 [2024-11-06 12:38:16.027647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.431 [2024-11-06 12:38:16.028225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.431 [2024-11-06 12:38:16.028566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.431 [2024-11-06 12:38:16.028579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.431 [2024-11-06 12:38:16.028589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.431 [2024-11-06 12:38:16.028598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.431 [2024-11-06 12:38:16.041647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.431 [2024-11-06 12:38:16.042099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.431 [2024-11-06 12:38:16.042121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.431 [2024-11-06 12:38:16.042132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.431 [2024-11-06 12:38:16.042396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.431 [2024-11-06 12:38:16.042669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.431 [2024-11-06 12:38:16.042682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.431 [2024-11-06 12:38:16.042692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.431 [2024-11-06 12:38:16.042701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.689 [2024-11-06 12:38:16.056409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.689 [2024-11-06 12:38:16.056872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.689 [2024-11-06 12:38:16.056897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.689 [2024-11-06 12:38:16.056917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.689 [2024-11-06 12:38:16.057183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.689 [2024-11-06 12:38:16.057448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.689 [2024-11-06 12:38:16.057469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.689 [2024-11-06 12:38:16.057480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.689 [2024-11-06 12:38:16.057489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.689 [2024-11-06 12:38:16.071021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.689 [2024-11-06 12:38:16.071551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.689 [2024-11-06 12:38:16.071597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.689 [2024-11-06 12:38:16.071621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.689 [2024-11-06 12:38:16.072200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.689 [2024-11-06 12:38:16.072580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.689 [2024-11-06 12:38:16.072593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.689 [2024-11-06 12:38:16.072603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.689 [2024-11-06 12:38:16.072612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.689 [2024-11-06 12:38:16.086188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.689 [2024-11-06 12:38:16.086607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.689 [2024-11-06 12:38:16.086630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.689 [2024-11-06 12:38:16.086641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.689 [2024-11-06 12:38:16.086904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.689 [2024-11-06 12:38:16.087169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.689 [2024-11-06 12:38:16.087180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.689 [2024-11-06 12:38:16.087190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.689 [2024-11-06 12:38:16.087199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.689 [2024-11-06 12:38:16.100749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.689 [2024-11-06 12:38:16.101260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.689 [2024-11-06 12:38:16.101304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.689 [2024-11-06 12:38:16.101327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.689 [2024-11-06 12:38:16.101831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.689 [2024-11-06 12:38:16.102101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.689 [2024-11-06 12:38:16.102113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.689 [2024-11-06 12:38:16.102123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.689 [2024-11-06 12:38:16.102132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.689 [2024-11-06 12:38:16.115426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.689 [2024-11-06 12:38:16.115909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.689 [2024-11-06 12:38:16.115932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.689 [2024-11-06 12:38:16.115943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.689 [2024-11-06 12:38:16.116207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.689 [2024-11-06 12:38:16.116479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.689 [2024-11-06 12:38:16.116491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.689 [2024-11-06 12:38:16.116501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.689 [2024-11-06 12:38:16.116509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.689 [2024-11-06 12:38:16.130053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.689 [2024-11-06 12:38:16.130605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.689 [2024-11-06 12:38:16.130651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.689 [2024-11-06 12:38:16.130674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.689 [2024-11-06 12:38:16.131252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.689 [2024-11-06 12:38:16.131649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.689 [2024-11-06 12:38:16.131662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.689 [2024-11-06 12:38:16.131672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.689 [2024-11-06 12:38:16.131680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.144712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.145190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.145212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.145223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.145493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.145758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.145769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.145779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.145792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.159293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.159857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.159902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.159926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.160518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.160831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.160842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.160852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.160861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.173866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.174396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.174419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.174429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.174701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.174968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.174979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.174989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.174999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.188416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.188931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.188986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.189010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.189603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.189928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.189939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.189949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.189958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.202946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.203487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.203532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.203555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.204040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.204304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.204316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.204326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.204335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.217577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.218123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.218168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.218191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.218780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.219045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.219057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.219067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.219076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.232311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.232852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.232875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.232886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.233150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.233415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.233426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.233436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.233445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.246948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.247452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.247481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.247496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.247760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.248025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.248036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.248046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.248054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.261531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.262059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.262105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.262130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.262722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.263005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.263016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.690 [2024-11-06 12:38:16.263026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.690 [2024-11-06 12:38:16.263035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.690 [2024-11-06 12:38:16.276293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.690 [2024-11-06 12:38:16.276847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.690 [2024-11-06 12:38:16.276869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.690 [2024-11-06 12:38:16.276880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.690 [2024-11-06 12:38:16.277143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.690 [2024-11-06 12:38:16.277407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.690 [2024-11-06 12:38:16.277419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.691 [2024-11-06 12:38:16.277429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.691 [2024-11-06 12:38:16.277437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.691 [2024-11-06 12:38:16.290940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.691 [2024-11-06 12:38:16.291493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.691 [2024-11-06 12:38:16.291516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.691 [2024-11-06 12:38:16.291528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.691 [2024-11-06 12:38:16.291791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.691 [2024-11-06 12:38:16.292060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.691 [2024-11-06 12:38:16.292072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.691 [2024-11-06 12:38:16.292082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.691 [2024-11-06 12:38:16.292091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.305753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.306239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.306279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.306305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.306898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.307271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.307283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.307293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.307302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.320377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.320927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.320952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.320964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.321227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.321500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.321513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.321523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.321532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.335037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.335561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.335585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.335595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.335859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.336124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.336136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.336146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.336159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.349663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.350196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.350240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.350263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.350856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.351380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.351392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.351402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.351411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.364427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.364961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.365009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.365034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.365624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.366209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.366234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.366255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.366274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 [2024-11-06 12:38:16.379729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.950 [2024-11-06 12:38:16.380229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.950 [2024-11-06 12:38:16.380251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.950 [2024-11-06 12:38:16.380262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.950 [2024-11-06 12:38:16.380532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.950 [2024-11-06 12:38:16.380798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.950 [2024-11-06 12:38:16.380810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.950 [2024-11-06 12:38:16.380819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.950 [2024-11-06 12:38:16.380828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 351659 Killed "${NVMF_APP[@]}" "$@"
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=353072
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 353072
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 353072 ']'
00:31:44.951 [2024-11-06 12:38:16.394339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:44.951 [2024-11-06 12:38:16.394852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.394875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.394885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:44.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:44.951 [2024-11-06 12:38:16.395148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:44.951 [2024-11-06 12:38:16.395414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.395426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.395436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.395446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:44.951 [2024-11-06 12:38:16.408978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.409511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.409534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.409544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.409808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.410071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.410083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.410093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.410101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.423606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.424135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.424157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.424168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.424432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.424704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.424716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.424727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.424736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.438272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.438823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.438846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.438857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.439120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.439385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.439396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.439406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.439415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.450725] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:31:44.951 [2024-11-06 12:38:16.450777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:44.951 [2024-11-06 12:38:16.452946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.453500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.453523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.453534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.453798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.454063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.454075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.454085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.454095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.467705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.468064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.468087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.468098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.468362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.468632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.468645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.468655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.468664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.482652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:44.951 [2024-11-06 12:38:16.483123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.951 [2024-11-06 12:38:16.483145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420
00:31:44.951 [2024-11-06 12:38:16.483157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set
00:31:44.951 [2024-11-06 12:38:16.483421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor
00:31:44.951 [2024-11-06 12:38:16.483691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:44.951 [2024-11-06 12:38:16.483704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:44.951 [2024-11-06 12:38:16.483714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:44.951 [2024-11-06 12:38:16.483724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:44.951 [2024-11-06 12:38:16.497252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.951 [2024-11-06 12:38:16.497795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.951 [2024-11-06 12:38:16.497818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.951 [2024-11-06 12:38:16.497830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.951 [2024-11-06 12:38:16.498093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.951 [2024-11-06 12:38:16.498357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.951 [2024-11-06 12:38:16.498369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.951 [2024-11-06 12:38:16.498379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.951 [2024-11-06 12:38:16.498388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.951 [2024-11-06 12:38:16.511915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.951 [2024-11-06 12:38:16.512439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.951 [2024-11-06 12:38:16.512474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.951 [2024-11-06 12:38:16.512486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.952 [2024-11-06 12:38:16.512750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.952 [2024-11-06 12:38:16.513015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.952 [2024-11-06 12:38:16.513026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.952 [2024-11-06 12:38:16.513036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.952 [2024-11-06 12:38:16.513045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.952 [2024-11-06 12:38:16.523358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.952 [2024-11-06 12:38:16.526563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.952 [2024-11-06 12:38:16.527110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.952 [2024-11-06 12:38:16.527133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.952 [2024-11-06 12:38:16.527143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.952 [2024-11-06 12:38:16.527408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.952 [2024-11-06 12:38:16.527681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.952 [2024-11-06 12:38:16.527693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.952 [2024-11-06 12:38:16.527703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.952 [2024-11-06 12:38:16.527712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.952 [2024-11-06 12:38:16.541255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.952 [2024-11-06 12:38:16.541795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.952 [2024-11-06 12:38:16.541819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.952 [2024-11-06 12:38:16.541830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.952 [2024-11-06 12:38:16.542093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.952 [2024-11-06 12:38:16.542359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.952 [2024-11-06 12:38:16.542371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.952 [2024-11-06 12:38:16.542381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.952 [2024-11-06 12:38:16.542390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:44.952 [2024-11-06 12:38:16.555918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:44.952 [2024-11-06 12:38:16.556457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.952 [2024-11-06 12:38:16.556485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:44.952 [2024-11-06 12:38:16.556496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:44.952 [2024-11-06 12:38:16.556765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:44.952 [2024-11-06 12:38:16.557038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:44.952 [2024-11-06 12:38:16.557050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:44.952 [2024-11-06 12:38:16.557060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:44.952 [2024-11-06 12:38:16.557071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:44.952 [2024-11-06 12:38:16.561466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.952 [2024-11-06 12:38:16.561488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.952 [2024-11-06 12:38:16.561495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.952 [2024-11-06 12:38:16.561501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:44.952 [2024-11-06 12:38:16.561506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.952 [2024-11-06 12:38:16.562823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.952 [2024-11-06 12:38:16.562916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.952 [2024-11-06 12:38:16.562926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:45.211 [2024-11-06 12:38:16.570516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.571104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.571132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.571152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.571445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.571730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.571745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.571756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.571767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.585284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.585851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.585879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.585892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.586159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.586426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.586438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.586449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.586474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.599985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.600559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.600587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.600601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.600867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.601135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.601147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.601157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.601167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.614715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.615288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.615314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.615328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.615600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.615867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.615880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.615891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.615901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.629406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.629943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.629967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.629979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.630243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.630514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.630527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.630538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.630547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.644058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.644636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.644665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.644676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.644941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.645206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.645218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.645228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.645237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.658732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.659260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.659283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.659294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.659563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.659827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.659840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.659850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.659859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.673343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.673897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.673919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.673931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.674194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.674465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.674477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.674486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.674495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:45.211 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:31:45.211 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.211 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:45.211 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.211 [2024-11-06 12:38:16.688010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.688557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.688581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.688592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.688856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.689120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.689133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.689143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.689152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.702700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.703263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.211 [2024-11-06 12:38:16.703285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.211 [2024-11-06 12:38:16.703296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.211 [2024-11-06 12:38:16.703566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.211 [2024-11-06 12:38:16.703833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.211 [2024-11-06 12:38:16.703845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.211 [2024-11-06 12:38:16.703855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.211 [2024-11-06 12:38:16.703864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.211 [2024-11-06 12:38:16.717386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.211 [2024-11-06 12:38:16.717867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.212 [2024-11-06 12:38:16.717889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.212 [2024-11-06 12:38:16.717900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.212 [2024-11-06 12:38:16.718163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.212 [2024-11-06 12:38:16.718427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.212 [2024-11-06 12:38:16.718439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.212 [2024-11-06 12:38:16.718449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.212 [2024-11-06 12:38:16.718463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 [2024-11-06 12:38:16.728232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.212 [2024-11-06 12:38:16.731952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.212 [2024-11-06 12:38:16.732495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.212 [2024-11-06 12:38:16.732518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.212 [2024-11-06 12:38:16.732529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.212 [2024-11-06 12:38:16.732793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.212 [2024-11-06 12:38:16.733057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.212 [2024-11-06 12:38:16.733069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.212 [2024-11-06 12:38:16.733079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:45.212 [2024-11-06 12:38:16.733087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 [2024-11-06 12:38:16.746610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.212 [2024-11-06 12:38:16.747089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.212 [2024-11-06 12:38:16.747111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.212 [2024-11-06 12:38:16.747122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.212 [2024-11-06 12:38:16.747386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.212 [2024-11-06 12:38:16.747656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.212 [2024-11-06 12:38:16.747668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.212 [2024-11-06 12:38:16.747678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.212 [2024-11-06 12:38:16.747687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.212 [2024-11-06 12:38:16.761176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.212 [2024-11-06 12:38:16.761649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.212 [2024-11-06 12:38:16.761672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.212 [2024-11-06 12:38:16.761683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.212 [2024-11-06 12:38:16.761947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.212 [2024-11-06 12:38:16.762211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.212 [2024-11-06 12:38:16.762223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.212 [2024-11-06 12:38:16.762239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.212 [2024-11-06 12:38:16.762249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.212 Malloc0 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 [2024-11-06 12:38:16.775746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.212 [2024-11-06 12:38:16.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.212 [2024-11-06 12:38:16.776316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ba40 with addr=10.0.0.2, port=4420 00:31:45.212 [2024-11-06 12:38:16.776327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161ba40 is same with the state(6) to be set 00:31:45.212 [2024-11-06 12:38:16.776595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161ba40 (9): Bad file descriptor 00:31:45.212 [2024-11-06 12:38:16.776859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.212 [2024-11-06 12:38:16.776871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.212 [2024-11-06 12:38:16.776881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.212 [2024-11-06 12:38:16.776890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 [2024-11-06 12:38:16.784606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.212 12:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 352019 00:31:45.212 [2024-11-06 12:38:16.790381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.470 4455.83 IOPS, 17.41 MiB/s [2024-11-06T11:38:17.085Z] [2024-11-06 12:38:16.905957] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:31:47.332 5237.57 IOPS, 20.46 MiB/s [2024-11-06T11:38:20.316Z] 5903.50 IOPS, 23.06 MiB/s [2024-11-06T11:38:21.248Z] 6413.44 IOPS, 25.05 MiB/s [2024-11-06T11:38:22.182Z] 6786.90 IOPS, 26.51 MiB/s [2024-11-06T11:38:23.113Z] 7126.45 IOPS, 27.84 MiB/s [2024-11-06T11:38:24.047Z] 7406.50 IOPS, 28.93 MiB/s [2024-11-06T11:38:24.982Z] 7705.62 IOPS, 30.10 MiB/s [2024-11-06T11:38:25.915Z] 7903.07 IOPS, 30.87 MiB/s 00:31:54.300 Latency(us) 00:31:54.300 [2024-11-06T11:38:25.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:54.300 Verification LBA range: start 0x0 length 0x4000 00:31:54.300 Nvme1n1 : 15.01 8059.95 31.48 7149.32 0.00 8385.36 644.19 14358.34 00:31:54.300 [2024-11-06T11:38:25.915Z] =================================================================================================================== 00:31:54.300 [2024-11-06T11:38:25.915Z] Total : 8059.95 31.48 7149.32 0.00 8385.36 644.19 14358.34 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:54.557 12:38:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.557 rmmod nvme_tcp 00:31:54.557 rmmod nvme_fabrics 00:31:54.557 rmmod nvme_keyring 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 353072 ']' 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 353072 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 353072 ']' 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 353072 00:31:54.557 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 353072 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 353072' 00:31:54.815 killing process with pid 353072 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # kill 353072 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 353072 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.815 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.816 12:38:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.347 00:31:57.347 real 0m25.636s 00:31:57.347 user 1m1.298s 00:31:57.347 sys 0m6.240s 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:57.347 ************************************ 00:31:57.347 END TEST nvmf_bdevperf 00:31:57.347 ************************************ 00:31:57.347 12:38:28 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.347 ************************************ 00:31:57.347 START TEST nvmf_target_disconnect 00:31:57.347 ************************************ 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:57.347 * Looking for test storage... 00:31:57.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.347 12:38:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.347 --rc genhtml_branch_coverage=1 00:31:57.347 --rc genhtml_function_coverage=1 00:31:57.347 --rc genhtml_legend=1 00:31:57.347 --rc geninfo_all_blocks=1 00:31:57.347 --rc geninfo_unexecuted_blocks=1 
00:31:57.347 00:31:57.347 ' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.347 --rc genhtml_branch_coverage=1 00:31:57.347 --rc genhtml_function_coverage=1 00:31:57.347 --rc genhtml_legend=1 00:31:57.347 --rc geninfo_all_blocks=1 00:31:57.347 --rc geninfo_unexecuted_blocks=1 00:31:57.347 00:31:57.347 ' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.347 --rc genhtml_branch_coverage=1 00:31:57.347 --rc genhtml_function_coverage=1 00:31:57.347 --rc genhtml_legend=1 00:31:57.347 --rc geninfo_all_blocks=1 00:31:57.347 --rc geninfo_unexecuted_blocks=1 00:31:57.347 00:31:57.347 ' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.347 --rc genhtml_branch_coverage=1 00:31:57.347 --rc genhtml_function_coverage=1 00:31:57.347 --rc genhtml_legend=1 00:31:57.347 --rc geninfo_all_blocks=1 00:31:57.347 --rc geninfo_unexecuted_blocks=1 00:31:57.347 00:31:57.347 ' 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.347 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.348 12:38:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:57.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.348 12:38:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.908 
12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:03.908 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.908 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:03.909 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:03.909 Found net devices under 0000:af:00.0: cvl_0_0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:03.909 Found net devices under 0000:af:00.1: cvl_0_1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.909 12:38:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:32:03.909 00:32:03.909 --- 10.0.0.2 ping statistics --- 00:32:03.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.909 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:32:03.909 00:32:03.909 --- 10.0.0.1 ping statistics --- 00:32:03.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.909 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.909 12:38:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:03.909 ************************************ 00:32:03.909 START TEST nvmf_target_disconnect_tc1 00:32:03.909 ************************************ 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:03.909 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.910 [2024-11-06 12:38:34.747072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.910 [2024-11-06 12:38:34.747173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d4460 with 
addr=10.0.0.2, port=4420 00:32:03.910 [2024-11-06 12:38:34.747217] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:03.910 [2024-11-06 12:38:34.747244] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:03.910 [2024-11-06 12:38:34.747264] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:03.910 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:03.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:03.910 Initializing NVMe Controllers 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:03.910 00:32:03.910 real 0m0.139s 00:32:03.910 user 0m0.066s 00:32:03.910 sys 0m0.073s 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 ************************************ 00:32:03.910 END TEST nvmf_target_disconnect_tc1 00:32:03.910 ************************************ 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:03.910 12:38:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 ************************************ 00:32:03.910 START TEST nvmf_target_disconnect_tc2 00:32:03.910 ************************************ 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=358414 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 358414 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 358414 ']' 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:03.910 12:38:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 [2024-11-06 12:38:34.896136] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:32:03.910 [2024-11-06 12:38:34.896190] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.910 [2024-11-06 12:38:34.967002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.910 [2024-11-06 12:38:35.007144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.910 [2024-11-06 12:38:35.007179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.910 [2024-11-06 12:38:35.007186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.910 [2024-11-06 12:38:35.007191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.910 [2024-11-06 12:38:35.007196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:03.910 [2024-11-06 12:38:35.008870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:03.910 [2024-11-06 12:38:35.008983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:03.910 [2024-11-06 12:38:35.009093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:03.910 [2024-11-06 12:38:35.009094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 Malloc0 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 12:38:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 [2024-11-06 12:38:35.200705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 12:38:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.910 [2024-11-06 12:38:35.232961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.910 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=358438 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:03.911 12:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:05.960 12:38:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 358414 00:32:05.960 12:38:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 
Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 [2024-11-06 12:38:37.261628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O 
failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 
00:32:05.960 [2024-11-06 12:38:37.261924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Write completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.960 starting I/O failed 00:32:05.960 Read completed with error (sct=0, sc=8) 00:32:05.961 
starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Write completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 [2024-11-06 12:38:37.262122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, sc=8) 00:32:05.961 starting I/O failed 00:32:05.961 Read completed with error (sct=0, 
sc=8) 00:32:05.961 starting I/O failed
00:32:05.961 Read completed with error (sct=0, sc=8)
00:32:05.961 starting I/O failed
00:32:05.961 Write completed with error (sct=0, sc=8)
00:32:05.961 starting I/O failed
[further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed", omitted]
00:32:05.961 [2024-11-06 12:38:37.262300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:05.961 [2024-11-06 12:38:37.262435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.961 [2024-11-06 12:38:37.262455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:05.961 qpair failed and we were unable to recover it.
[the identical three-message sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 2024-11-06 12:38:37.262666 through 12:38:37.280619, differing only in timestamps; repeats omitted]
00:32:05.964 [2024-11-06 12:38:37.280754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.280786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.280917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.280947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.281153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.281185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.281418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.281447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.281605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.281637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 
00:32:05.964 [2024-11-06 12:38:37.281748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.281779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.281897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.281928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 
00:32:05.964 [2024-11-06 12:38:37.282434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.282944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.282973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 
00:32:05.964 [2024-11-06 12:38:37.283205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 
00:32:05.964 [2024-11-06 12:38:37.283831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.283863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.283985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.284015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.284193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.964 [2024-11-06 12:38:37.284202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.964 qpair failed and we were unable to recover it. 00:32:05.964 [2024-11-06 12:38:37.284278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.284288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.284367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.284399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.284565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.284694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.284725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.284910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.284942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.285060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.285242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.285321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.285546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.285783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.285939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.285965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.286170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.286179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.286316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.286325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.286486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.286525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.286638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.286670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.286791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.286823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.287029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.287238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.287328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.287451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.287618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.287868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.287899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.288092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.288123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.288255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.288285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.288530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.288540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.288685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.288716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.288834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.288866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.288984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.289217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.289374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.289586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.289671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.289746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.289776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.290058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.290267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.290452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.290639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 
00:32:05.965 [2024-11-06 12:38:37.290745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.965 qpair failed and we were unable to recover it. 00:32:05.965 [2024-11-06 12:38:37.290829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.965 [2024-11-06 12:38:37.290838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.290916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.290925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.291182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.291259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.291422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.291476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.291621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.291653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.291764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.291795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.291936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.291969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.292105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.292136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.292340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.292372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.292500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.292535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.292790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.292821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.292945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.292977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.293164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.293196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.293473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.293505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.293691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.293723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.293857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.293898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.294601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.294986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.294995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.295059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.295069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.295283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.295315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.295569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.295602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.295783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.295814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.296001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.296233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.296418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.296583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.296761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.296918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.296950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.966 [2024-11-06 12:38:37.297236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.297269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.297498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.297533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.297736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.297769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.297962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.297993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 00:32:05.966 [2024-11-06 12:38:37.298108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.966 [2024-11-06 12:38:37.298140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.966 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.298229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.298239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.298309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.298318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.298483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.298517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.298695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.298777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.299050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.299127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.299376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.299412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.299586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.299620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.299754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.299786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.299979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.300008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.300210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.300243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.300359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.300395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.300624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.300659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.300886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.300917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.301345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.301833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.301978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.302201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.302311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.302383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.302536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.302630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.302716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.302887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.302919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.303108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.303321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.303545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.303638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.303815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.303964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.303996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.304271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.304303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.304548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.304558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 
00:32:05.967 [2024-11-06 12:38:37.304623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.304647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.304791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.304824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.967 [2024-11-06 12:38:37.305027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.967 [2024-11-06 12:38:37.305060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.967 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.305253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.305285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.305481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.305514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.305643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.305675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.305804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.305836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.305958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.305990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.306114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.306152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.306276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.306307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.306493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.306503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.306660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.306691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.306820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.306851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.307047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.307278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.307364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.307476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.307691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.307925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.307956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.308151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.308302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.308544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.308640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.308788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.308863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.308944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.308953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.309515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.309818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.309973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.310005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 00:32:05.968 [2024-11-06 12:38:37.310183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.968 [2024-11-06 12:38:37.310214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.968 qpair failed and we were unable to recover it. 
00:32:05.968 [2024-11-06 12:38:37.310412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.968 [2024-11-06 12:38:37.310444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.968 qpair failed and we were unable to recover it.
[... the three-line pattern above (connect() failed, errno = 111 / ECONNREFUSED; sock connection error; qpair failed and we were unable to recover it) repeats continuously from 12:38:37.310412 through 12:38:37.332963, always for addr=10.0.0.2, port=4420, alternating between tqpair=0x7f2060000b90 and tqpair=0x1d87550 ...]
00:32:05.971 [2024-11-06 12:38:37.333175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.971 [2024-11-06 12:38:37.333206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.971 qpair failed and we were unable to recover it. 00:32:05.971 [2024-11-06 12:38:37.333329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.971 [2024-11-06 12:38:37.333360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.971 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.333493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.333526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.333734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.333765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.334048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.334079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.334361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.334432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.334748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.334783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.334984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.335017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.335256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.335291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.335601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.335633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.335824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.335833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.335922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.335931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.336091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.336192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.336293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.336438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.336538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.336763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.336794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.337022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.337197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.337504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.337610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.337701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.337857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.337888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.338085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.338116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.338244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.338276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.338562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.338720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.338751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.339059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.339091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.339294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.339303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.339551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.339561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.339646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.339655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.339798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.339828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.340045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.340076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.340328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.340360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.972 [2024-11-06 12:38:37.340619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.340629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.340811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.340841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.341068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.341099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.341369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.341378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 00:32:05.972 [2024-11-06 12:38:37.341481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.972 [2024-11-06 12:38:37.341515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.972 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.341645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.341676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.341938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.341969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.342168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.342198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.342478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.342511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.342713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.342723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.342953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.342962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.343602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.343857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.343989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.344142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.344436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.344594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.344677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.344916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.344927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.345076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.345085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.345229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.345260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.345402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.345433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.345693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.345764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.345974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.346009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.346197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.346230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.346362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.346395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.346571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.346605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.346814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.346840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.346998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.347009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.347249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.347281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.973 [2024-11-06 12:38:37.347479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.347512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.347707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.347740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.347890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.347899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.347992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.348001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 00:32:05.973 [2024-11-06 12:38:37.348106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.973 [2024-11-06 12:38:37.348115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.973 qpair failed and we were unable to recover it. 
00:32:05.974 [2024-11-06 12:38:37.348194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.974 [2024-11-06 12:38:37.348203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.974 qpair failed and we were unable to recover it.
00:32:05.976 [... the three messages above repeat for each subsequent connection attempt on tqpair=0x7f2060000b90 (addr=10.0.0.2, port=4420), from 12:38:37.348297 through 12:38:37.371515 ...]
00:32:05.976 [2024-11-06 12:38:37.371793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.371823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.372022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.372053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.372253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.372285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.372474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.372507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.372767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.372777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.372938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.372969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.373160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.373191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.373373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.373404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.373556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.373767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.373776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.373970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.374001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.374196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.374227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.374469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.374502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.374651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.374682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.374882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.374913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.375054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.375092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.375287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.375318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.375503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.375536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.375724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.375732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.375917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.375948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.376160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.376191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.376321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.376352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.376643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.376652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.376827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.376836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.376947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.376977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.377120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.377152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.377437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.377488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.377591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.377600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.377690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.377699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.377937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.377946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.378020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 
00:32:05.977 [2024-11-06 12:38:37.378717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.378811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.378970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.379002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.977 [2024-11-06 12:38:37.379126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.977 [2024-11-06 12:38:37.379156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.977 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.379299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.379331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.379608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.379640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.379891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.379970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.379979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.380193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.380225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.380407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.380438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.380651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.380683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.380852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.380861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.380934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.380943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.381127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.381136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.381357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.381389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.381588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.381621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.381867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.381876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.382021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.382052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.382256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.382287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.382504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.382537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.382757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.382768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.382914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.382923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.383144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.383175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.383453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.383505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.383698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.383728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.383922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.383953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.384157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.384189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.384330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.384373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.384529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.384539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.384693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.384724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.384871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.384902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.385184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.385215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.385411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.385420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.385604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.385638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.385852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.385883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 
00:32:05.978 [2024-11-06 12:38:37.386138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.386170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.386288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.386319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.386604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.978 [2024-11-06 12:38:37.386636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.978 qpair failed and we were unable to recover it. 00:32:05.978 [2024-11-06 12:38:37.386910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.386919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.387072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.387104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 
00:32:05.979 [2024-11-06 12:38:37.387291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.387321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.387527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.387559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.387744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.387753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.387942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.387985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.388223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.388255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 
00:32:05.979 [2024-11-06 12:38:37.388449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.388495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.388779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.388810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.389181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.389252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.389536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.389606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 00:32:05.979 [2024-11-06 12:38:37.389911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.979 [2024-11-06 12:38:37.389981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.979 qpair failed and we were unable to recover it. 
00:32:05.979 [2024-11-06 12:38:37.390122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.390155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.390286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.390317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.390525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.390558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.390688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.390698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.390937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.390967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.391115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.391147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.391347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.391377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.391584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.391593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.391755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.391786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.391990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.392160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.392380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.392603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.392779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.392924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.392955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.393237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.393268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.393533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.393543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.393745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.393755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.393843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.393852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.394959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.394968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.395134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.395165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.979 [2024-11-06 12:38:37.395430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.979 [2024-11-06 12:38:37.395470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.979 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.395609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.395640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.395918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.395927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.396952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.396984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.397232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.397263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.397520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.397530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.397715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.397793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.398057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.398280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.398489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.398613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.398760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.398973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.399006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.399227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.399259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.399487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.399497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.399660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.399692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.399876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.399908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.400193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.400384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.400393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.400571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.400615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.400810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.401047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.401078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.401215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.401248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.401432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.401471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.401768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.401799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.402023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.402055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.402305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.402338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.402467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.402477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.402688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.402721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.402842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.402873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.403151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.980 [2024-11-06 12:38:37.403184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.980 qpair failed and we were unable to recover it.
00:32:05.980 [2024-11-06 12:38:37.403399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.403431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.403766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.403838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.404005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.404241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.404273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.404527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.404537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.404746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.404778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.404938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.404970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.405151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.405183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.405313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.405345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.405532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.405566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.405817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.405849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.406041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.406072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.406267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.406298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.406565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.406599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.406723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.406754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.406982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.407219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.407310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.407475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.407635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.407872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.407903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.408159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.408190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.408445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.408484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.408724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.408733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.408896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.408905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.409066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.409098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.409216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.409248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.409369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.409402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.409642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.409682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.409882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.409914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.410170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.410202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.410479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.410489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.410640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.981 [2024-11-06 12:38:37.410650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:05.981 qpair failed and we were unable to recover it.
00:32:05.981 [2024-11-06 12:38:37.410895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.981 [2024-11-06 12:38:37.410926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.981 qpair failed and we were unable to recover it. 00:32:05.981 [2024-11-06 12:38:37.411152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.981 [2024-11-06 12:38:37.411185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.981 qpair failed and we were unable to recover it. 00:32:05.981 [2024-11-06 12:38:37.411414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.981 [2024-11-06 12:38:37.411445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.411671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.411705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.411980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.412011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.412323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.412355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.412658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.412691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.412984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.413017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.413207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.413238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.413466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.413501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.413630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.413640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.413826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.413858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.414086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.414118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.414373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.414406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.414636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.414670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.414803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.414835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.415773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.415858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.415997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.416146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.416324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.416621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.416767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.416887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.416896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.417049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.417059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.417148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.417157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.417366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.417374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.417618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.417650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.417778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.417809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.418024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.418055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.418288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.418320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 
00:32:05.982 [2024-11-06 12:38:37.418546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.418556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.418805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.418836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.419069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.419099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.982 qpair failed and we were unable to recover it. 00:32:05.982 [2024-11-06 12:38:37.419321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.982 [2024-11-06 12:38:37.419352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.419564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.419573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.419720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.419763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.419887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.419918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.420172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.420422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.420452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.420658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.420691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.420805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.420836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.421028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.421060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.421257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.421288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.421501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.421535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.421812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.421842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.422056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.422088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.422288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.422319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.422511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.422542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.422742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.422774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.422997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.423029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.423212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.423242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.423511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.423521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.423688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.423720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.423844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.423875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.424076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.424107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.424302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.424334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.424589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.424622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.424763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.424774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.424926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.424935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.425158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.425167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.425305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.425315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.425552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.425585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.425787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.425818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.426047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.426080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.426225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.426256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.426456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.426498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.426756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.426788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.426927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.426959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.427179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.427211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 00:32:05.983 [2024-11-06 12:38:37.427408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.983 [2024-11-06 12:38:37.427438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.983 qpair failed and we were unable to recover it. 
00:32:05.983 [2024-11-06 12:38:37.427584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.427616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.427777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.427786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.428012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.428044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.428254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.428285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.428437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.428480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 
00:32:05.984 [2024-11-06 12:38:37.428626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.428635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.428782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.428792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.428980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.429012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.429284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.429571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.429604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 
00:32:05.984 [2024-11-06 12:38:37.429893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.429937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.430088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.430097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.430360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.430392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.430596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.430629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 00:32:05.984 [2024-11-06 12:38:37.430848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.984 [2024-11-06 12:38:37.430879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.984 qpair failed and we were unable to recover it. 
00:32:05.984 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock errno = 111 failure for tqpair=0x7f205c000b90 (addr=10.0.0.2, port=4420) repeats continuously through 12:38:37.451560 ...]
00:32:05.987 [2024-11-06 12:38:37.451669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.451678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.451765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.451774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.451847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.451856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.451937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.451980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.452094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.452125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 
00:32:05.987 [2024-11-06 12:38:37.452311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.452343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.452595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.452628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.452770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.452800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.452932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.452964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.453157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.453189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 
00:32:05.987 [2024-11-06 12:38:37.453296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.453329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.453511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.453544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.453728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.453761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.453957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.453988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.454216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.454248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 
00:32:05.987 [2024-11-06 12:38:37.454430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.454469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.454651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.454684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.454868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.454900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.455010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.455255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 
00:32:05.987 [2024-11-06 12:38:37.455427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.455642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.455822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.455985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.455994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 00:32:05.987 [2024-11-06 12:38:37.456113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.987 [2024-11-06 12:38:37.456145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.987 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.456279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.456310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.456431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.456494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.456630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.456640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.456776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.456798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.456986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.457149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.457299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.457516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.457735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.457915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.457953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.458021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.458653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.458914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.458942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.459076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.459108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.459374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.459405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.459642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.459652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.459718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.459727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.459869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.459878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.460033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.460064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.460268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.460301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.460497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.460529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.460641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.460674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.460916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.460952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.461098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.461108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.461299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.461332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.461542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.461575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.461715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.461747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.461923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.461932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.462072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.462081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.462214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.462223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 
00:32:05.988 [2024-11-06 12:38:37.462306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.462315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.462468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.462478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.462550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.988 [2024-11-06 12:38:37.462561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.988 qpair failed and we were unable to recover it. 00:32:05.988 [2024-11-06 12:38:37.462635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.462644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.462733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.462742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 
00:32:05.989 [2024-11-06 12:38:37.462893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.462924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.463087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.463248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.463280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.463476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.463510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.463763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.463773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 
00:32:05.989 [2024-11-06 12:38:37.463927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.463937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.464011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.464141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.464342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.464507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 
00:32:05.989 [2024-11-06 12:38:37.464737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.464956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.464965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.465135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.465167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.465376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.465408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 00:32:05.989 [2024-11-06 12:38:37.465557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.989 [2024-11-06 12:38:37.465591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.989 qpair failed and we were unable to recover it. 
00:32:05.991 [2024-11-06 12:38:37.485188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.991 [2024-11-06 12:38:37.485260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:05.991 qpair failed and we were unable to recover it.
00:32:05.992 [2024-11-06 12:38:37.490181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.490201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.490361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.490370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.490508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.490518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.490610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.490619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.490803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.490835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 
00:32:05.992 [2024-11-06 12:38:37.491018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.491049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.491236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.491268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.491473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.491505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.491701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.491732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.491926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.491935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 
00:32:05.992 [2024-11-06 12:38:37.492071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.492080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.492290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.492323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.492544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.492576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.492796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.492805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.492986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.493019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 
00:32:05.992 [2024-11-06 12:38:37.493272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.493304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.493453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.493494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.493711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.493744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.493941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.493972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.494213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.494283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 
00:32:05.992 [2024-11-06 12:38:37.494435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.494488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.494748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.494794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.992 [2024-11-06 12:38:37.494930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.992 [2024-11-06 12:38:37.494939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.992 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.495182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.495214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.495492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.495528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.495675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.495707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.495942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.495952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.496169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.496178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.496361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.496371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.496477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.496510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.496722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.496754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.496945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.496976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.497119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.497135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.497273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.497282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.497514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.497551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.497669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.497679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.497946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.498069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.498220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.498519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.498680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.498760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.498831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.499064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.499074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.499234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.499243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.499484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.499517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.499716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.499748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.500060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.500092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.500213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.500244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.500474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.500508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.500761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.500770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.500915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.500946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.501142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.501173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.501366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.501398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.501652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.501686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.501805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.501836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.501971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.502002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.502284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.502316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.502585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.502618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.502863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.502934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 
00:32:05.993 [2024-11-06 12:38:37.503230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.503302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.503489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.503525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.503833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.503865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.504073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.993 [2024-11-06 12:38:37.504104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.993 qpair failed and we were unable to recover it. 00:32:05.993 [2024-11-06 12:38:37.504363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.504372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 
00:32:05.994 [2024-11-06 12:38:37.504531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.504541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.504616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.504625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.504756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.504765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.504919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.504950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.505083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.505114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 
00:32:05.994 [2024-11-06 12:38:37.505395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.505426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.505564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.505596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.505872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.505905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.506050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.506082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 00:32:05.994 [2024-11-06 12:38:37.506294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.994 [2024-11-06 12:38:37.506326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.994 qpair failed and we were unable to recover it. 
00:32:05.994 [2024-11-06 12:38:37.506470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.994 [2024-11-06 12:38:37.506503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.994 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, followed by nvme_tcp_qpair_connect_sock error for tqpair=0x7f2060000b90 addr=10.0.0.2 port=4420, followed by "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt from 12:38:37.506586 through 12:38:37.534298 ...]
00:32:05.997 [2024-11-06 12:38:37.534547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.534557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.534640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.534649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.534732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.534742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.534858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.534889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.535080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.535112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.535338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.535370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.535635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.535668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.535878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.535911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.536170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.536203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.536506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.536517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.536778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.536788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.536859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.536877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.537049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.537060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.537300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.537332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.537574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.537607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.537852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.537884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.538206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.538238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.538519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.538531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.538684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.538694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.538919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.538929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.539090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.539099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.539328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.539361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.539566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.539599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.539710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.539740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.539972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.540125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.540135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.540319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.540350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.540539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.540573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.540836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.540868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.541016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.541027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.541204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.541235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.541521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.541555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.541779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.541810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.541960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.541992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.542246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.542255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.542421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.542431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.542664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.542676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.542851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.542860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.543094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.543103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.543289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.543299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 
00:32:05.997 [2024-11-06 12:38:37.543457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.997 [2024-11-06 12:38:37.543470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.997 qpair failed and we were unable to recover it. 00:32:05.997 [2024-11-06 12:38:37.543694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.543704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.543910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.543919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.544019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.544264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.544363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.544534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.544613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.544887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.544898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.545160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.545271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.545417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.545576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.545669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.545953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.545964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.546121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.546316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.546401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.546612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.546786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.546933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.546943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.547169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.547334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.547478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.547645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.547752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.547837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.547847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.548004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.548273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.548519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.548679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.548842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.548983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.548992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.549159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.549169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 00:32:05.998 [2024-11-06 12:38:37.549401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:05.998 [2024-11-06 12:38:37.549411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:05.998 qpair failed and we were unable to recover it. 
00:32:05.998 [2024-11-06 12:38:37.549549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:05.998 [2024-11-06 12:38:37.549559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:05.998 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet repeats for tqpair=0x7f2060000b90 from 12:38:37.549 through 12:38:37.564 ...]
00:32:06.306 [2024-11-06 12:38:37.564334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.306 [2024-11-06 12:38:37.564362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:06.306 qpair failed and we were unable to recover it.
00:32:06.306 [2024-11-06 12:38:37.564547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.306 [2024-11-06 12:38:37.564574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.306 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f2068000b90 through 12:38:37.568 ...]
00:32:06.306 [2024-11-06 12:38:37.568442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.568484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.568723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.568757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.568983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.568993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.569184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.569193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.569284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.569293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 
00:32:06.306 [2024-11-06 12:38:37.569389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.569399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.569631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.569641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.569802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.569835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.570142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.570175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.570393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.570425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 
00:32:06.306 [2024-11-06 12:38:37.570760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.570795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.571082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.571114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.571392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.571425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.571571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.571605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.571747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.571779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 
00:32:06.306 [2024-11-06 12:38:37.572062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.572095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.572348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.572380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.572622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.572656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.572843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.572852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.573101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.573134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 
00:32:06.306 [2024-11-06 12:38:37.573337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.573369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.573616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.306 [2024-11-06 12:38:37.573651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.306 qpair failed and we were unable to recover it. 00:32:06.306 [2024-11-06 12:38:37.573959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.573991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.574135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.574145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.574330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.574358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.574650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.574684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.574945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.574977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.575255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.575289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.575516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.575549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.575843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.575877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.576071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.576081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.576273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.576306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.576602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.576635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.576767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.576794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.577030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.577040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.577234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.577245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.577454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.577473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.577722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.577756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.578023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.578055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.578365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.578414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.578679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.578704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.579027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.579059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.579329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.579360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.579658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.579693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.579883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.579916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.580162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.580172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.580434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.580473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.580747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.580780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.580987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.580997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.581149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.581193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.581497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.581530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.581798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.581831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.582139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.582171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.582440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.582682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.582692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.582951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.582961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.583199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.583209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.583418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.583428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.583644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.583677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.583884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.583918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.584180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.584214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.584514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.584635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.584668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.584777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.584787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.584986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.585021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.585237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.585270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.585557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.585629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.585863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.585898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.586176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.586209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.586480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.586514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.586816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.586848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.587112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.587146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.587345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.587375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.587607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.587639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.587849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.587881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307 [2024-11-06 12:38:37.588135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.588170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.588477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.588511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.588796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.588829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.589007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.589017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 00:32:06.307 [2024-11-06 12:38:37.589187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.307 [2024-11-06 12:38:37.589219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.307 qpair failed and we were unable to recover it. 
00:32:06.307-00:32:06.309 [2024-11-06 12:38:37.589474 through 12:38:37.617128] (the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence repeats ~110 more times)
00:32:06.309 [2024-11-06 12:38:37.617413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.617447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.617728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.617762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.618041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.618209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.618343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.618506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.618670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.618942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.618953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.619036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.619269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.619436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.619545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.619692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.619861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.619870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.620105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.620114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.620338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.620503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.620514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.620762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.620773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.620995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.621005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.621186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.621195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.621408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.621419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.621633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.621644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.621853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.621863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.622100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.622110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.622275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.622285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.622493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.622503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.622709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.622718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.622854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.622863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.623130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.623140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.623277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.623287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 
00:32:06.309 [2024-11-06 12:38:37.623444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.623454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.309 qpair failed and we were unable to recover it. 00:32:06.309 [2024-11-06 12:38:37.623593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.309 [2024-11-06 12:38:37.623603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.623685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.623695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.623916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.623926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.624065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.624075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.624307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.624318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.624479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.624492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.624669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.624679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.624935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.624945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.625183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.625280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.625378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.625541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.625758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.625965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.625975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.626059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.626179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.626287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.626435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.626594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.626746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.626915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.626924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.627166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.627176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.627394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.627404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.627552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.627562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.627802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.627812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.628029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.628040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.628299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.628309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.628479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.628490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.628748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.628758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.628911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.628920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.628992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.629001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.629229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.629240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.629470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.629480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.629762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.629772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.630012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.630253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.630344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.630584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.630804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.630979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.631220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.631231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.631465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.631475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.631739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.631749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 00:32:06.310 [2024-11-06 12:38:37.631956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.310 [2024-11-06 12:38:37.631966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.310 qpair failed and we were unable to recover it. 
00:32:06.310 [2024-11-06 12:38:37.632134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.310 [2024-11-06 12:38:37.632145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.310 qpair failed and we were unable to recover it.
00:32:06.310 [... same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f2068000b90 repeated through 12:38:37.657, each followed by "qpair failed and we were unable to recover it." ...]
00:32:06.312 [2024-11-06 12:38:37.657202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95530 is same with the state(6) to be set
00:32:06.312 [2024-11-06 12:38:37.657508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.312 [2024-11-06 12:38:37.657579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.312 qpair failed and we were unable to recover it.
00:32:06.312 [2024-11-06 12:38:37.657897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.312 [2024-11-06 12:38:37.657926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.312 qpair failed and we were unable to recover it.
00:32:06.312 [2024-11-06 12:38:37.658148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.658185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.658395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.658430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.658770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.658809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.659036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.659069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.659360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.659391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.659675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.659708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.659849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.659883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.660086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.660117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.660336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.660368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.660599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.660633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.660780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.660812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.661101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.661137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.661405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.661438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.661739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.661774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.662052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.662085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.662272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.662306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.662501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.662534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.662788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.662822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.663027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.663037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.663125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.663135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.663382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.663391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.663600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.663610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.663774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.663806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.664101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.664135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.664330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.664371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.664547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.664558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.664709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.664719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.664891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.664901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.665112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.665144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.665370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.665403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.665650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.665684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.665997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.666030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.666244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.666254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.666486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.666521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.666742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.666775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.667077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.667109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.667298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.667330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.667513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.667525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.667731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.667741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.667896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.667905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.668087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.668098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.668363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.668396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.668538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.668571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.668830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.668863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.669114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.669147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 
00:32:06.312 [2024-11-06 12:38:37.669447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.669457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.669605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.312 [2024-11-06 12:38:37.669615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.312 qpair failed and we were unable to recover it. 00:32:06.312 [2024-11-06 12:38:37.669788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.669820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.670044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.670054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.670156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.670165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.670325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.670356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.670627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.670661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.670944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.670977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.671264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.671275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.671515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.671525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.671766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.671799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.672025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.672057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.672308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.672340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.672596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.672606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.672674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.672684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.672906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.672915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.673161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.673171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.673315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.673325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.673545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.673556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.673789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.673827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.674014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.674046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.674248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.674280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.674486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.674521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.674854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.674887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.675049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.675060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.675266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.675276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.675491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.675526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.675811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.675843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.676133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.676166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.676380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.676390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.676543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.676594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.676858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.676890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.677185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.677216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.677514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.677548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.677700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.677731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [2024-11-06 12:38:37.677929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.677960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.678175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.678207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.678474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.678507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.678696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.678729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 00:32:06.313 [2024-11-06 12:38:37.678848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.313 [2024-11-06 12:38:37.678881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.313 qpair failed and we were unable to recover it. 
00:32:06.313 [... ~110 further identical log entries omitted: posix_sock_create connect() to addr=10.0.0.2, port=4420 kept failing with errno = 111, reported in turn for tqpairs 0x7f2060000b90, 0x7f2068000b90 and 0x7f205c000b90, each entry followed by "qpair failed and we were unable to recover it." ...]
00:32:06.315 [2024-11-06 12:38:37.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.704087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.704350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.704382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.704683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.704716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.704971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.705003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.705206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.705237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.705437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.705481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.705721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.705731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.705941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.706166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.706176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.706339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.706348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.706590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.706625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.706848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.706881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.707102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.707112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.707285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.707318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.707623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.707658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.707929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.707963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.708284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.708324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.708582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.708627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.708789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.708798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.708975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.709006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.709265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.709299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.709559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.709570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.709720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.709730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.709898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.709941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.710221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.710254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.710486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.710522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.710689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.710700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.710948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.710980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.711257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.711289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.711495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.711529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.711761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.711771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.711982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.711993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.712212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.712222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.712472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.712483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.712672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.712683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.712844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.712854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.713070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.713081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.713173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.713183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.713417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.713449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.713756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.713790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.714058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.714089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.714391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.714401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.714500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.714510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.714597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.714607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.714846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.714857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.715075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.715108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.715390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.715422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.715705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.715716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.715954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.715964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.716197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.716207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 
00:32:06.315 [2024-11-06 12:38:37.716433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.716443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.315 qpair failed and we were unable to recover it. 00:32:06.315 [2024-11-06 12:38:37.716655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.315 [2024-11-06 12:38:37.716666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.716823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.716833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.717099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.717132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.717420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.717454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 
00:32:06.316 [2024-11-06 12:38:37.717746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.717757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.717930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.717942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.718104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.718136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.718278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.718313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.718601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.718635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 
00:32:06.316 [2024-11-06 12:38:37.718896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.718927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.719235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.719268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.719481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.719515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.719799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.719831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.720133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.720164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 
00:32:06.316 [2024-11-06 12:38:37.720438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.720478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.720687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.720719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.720977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.721009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.721229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.721260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.721525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.721559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 
00:32:06.316 [2024-11-06 12:38:37.721860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.721870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.722077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.722110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.722393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.722424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.722690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.722701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 00:32:06.316 [2024-11-06 12:38:37.722861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.316 [2024-11-06 12:38:37.722871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.316 qpair failed and we were unable to recover it. 
00:32:06.316 [2024-11-06 12:38:37.722948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.316 [2024-11-06 12:38:37.722958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.316 qpair failed and we were unable to recover it.
[... identical three-line failure sequence (connect() errno = 111 against addr=10.0.0.2, port=4420, tqpair=0x7f2060000b90, followed by "qpair failed and we were unable to recover it.") repeats for every retry attempt from 2024-11-06 12:38:37.722948 through 2024-11-06 12:38:37.751000; repeats elided ...]
00:32:06.317 [2024-11-06 12:38:37.751284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.317 [2024-11-06 12:38:37.751316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.317 qpair failed and we were unable to recover it. 00:32:06.317 [2024-11-06 12:38:37.751607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.317 [2024-11-06 12:38:37.751642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.317 qpair failed and we were unable to recover it. 00:32:06.317 [2024-11-06 12:38:37.751921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.751954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.752105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.752137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.752420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.752453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.752762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.752795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.752914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.752946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.753230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.753263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.753478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.753512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.753792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.753801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.753888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.753898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.754137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.754170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.754439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.754482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.754767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.754777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.754940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.754951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.755169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.755201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.755439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.755482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.755731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.755741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.756008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.756041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.756310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.756342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.756622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.756633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.756892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.756921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.757143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.757175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.757377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.757387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.757563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.757819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.757850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.758164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.758202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.758510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.758545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.758856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.758888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.759138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.759170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.759445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.759487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.759691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.759722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.759841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.759873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.760063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.760298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.760463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.760547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.760691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.760858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.760869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.761034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.761066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.761353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.761385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.761490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.761525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.761806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.761816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.761910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.761943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.762128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.762288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.762554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.762637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.762746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.762904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.762913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.763060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.763070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.763216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.763248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.763449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.763493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.763697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.763731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.763914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.763947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.764210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.764382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.764573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.764651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.764798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.764893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.764984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.764993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.765127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.765137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.765313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.765324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.765424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.765456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 
00:32:06.318 [2024-11-06 12:38:37.765671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.765704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.765939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.765978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.766304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.766338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.318 [2024-11-06 12:38:37.766663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.318 [2024-11-06 12:38:37.766696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.318 qpair failed and we were unable to recover it. 00:32:06.319 [2024-11-06 12:38:37.766880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.766913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 
00:32:06.319 [2024-11-06 12:38:37.767166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.767199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 00:32:06.319 [2024-11-06 12:38:37.767477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.767488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 00:32:06.319 [2024-11-06 12:38:37.767620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.767630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 00:32:06.319 [2024-11-06 12:38:37.767872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.767904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 00:32:06.319 [2024-11-06 12:38:37.768090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.319 [2024-11-06 12:38:37.768122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.319 qpair failed and we were unable to recover it. 
00:32:06.319 [2024-11-06 12:38:37.768264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.319 [2024-11-06 12:38:37.768314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.319 qpair failed and we were unable to recover it.
[repeats elided: the connect()/sock-connection-error/"qpair failed" triplet above recurs roughly 115 more times between 12:38:37.768 and 12:38:37.795, all against addr=10.0.0.2, port=4420 with errno = 111; the failing tqpair is 0x7f2060000b90 until 12:38:37.791, then 0x7f2068000b90 once, then 0x7f205c000b90 for the remaining attempts]
00:32:06.320 [2024-11-06 12:38:37.796164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.796196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.796446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.796455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.796655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.796689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.796881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.796914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.797102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.797134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 
00:32:06.320 [2024-11-06 12:38:37.797336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.797369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.797638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.797673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.797886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.797920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.798204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.798238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 00:32:06.320 [2024-11-06 12:38:37.798516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.320 [2024-11-06 12:38:37.798526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.320 qpair failed and we were unable to recover it. 
00:32:06.320 [2024-11-06 12:38:37.798838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.798871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.799062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.799111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.799398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.799431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.799723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.799733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.799962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.799973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.800075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.800085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.800313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.800323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.800531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.800542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.800704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.800950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.800960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.801162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.801171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.801334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.801344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.801485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.801518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.801829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.801873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.802165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.802198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.802479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.802515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.802708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.802742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.802997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.803007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.803249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.803259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.803362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.803394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.803695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.803729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.804000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.804032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.804396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.804601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.804635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.804929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.804961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.805240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.805273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.805504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.805548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.805861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.805894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.806120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.806152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.806440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.806489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.806714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.806724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.806892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.806902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.807145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.807176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.807387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.807420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.807631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.807667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.807902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.807912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.808068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.808078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.808294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.808327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.808608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.808641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.808873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.808907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.809245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.809278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.809510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.809544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.809742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.809773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.809971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.810005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.810118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.810150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.810487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.810520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.810727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.810760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.811033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.811065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.811272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.811306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.811578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.811588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.811817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.811827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.811976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.811986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.812080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.812104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.812456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.812538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.812845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.812881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.813088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.813121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.813335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.813370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.813666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.813701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.813932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.813963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.814180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.814213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.814521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.814553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.814756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.814790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.815086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.815119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.815269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.815302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.815558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.815601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 
00:32:06.321 [2024-11-06 12:38:37.815750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.815760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.815926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.321 [2024-11-06 12:38:37.815936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.321 qpair failed and we were unable to recover it. 00:32:06.321 [2024-11-06 12:38:37.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.322 [2024-11-06 12:38:37.816086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.322 qpair failed and we were unable to recover it. 00:32:06.322 [2024-11-06 12:38:37.816298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.322 [2024-11-06 12:38:37.816308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.322 qpair failed and we were unable to recover it. 00:32:06.322 [2024-11-06 12:38:37.816559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.322 [2024-11-06 12:38:37.816570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.322 qpair failed and we were unable to recover it. 
00:32:06.323 [2024-11-06 12:38:37.843353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.843384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.843566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.843601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.843855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.843888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.844036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.844250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 
00:32:06.323 [2024-11-06 12:38:37.844406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.844553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.844652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.844885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.844918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.845143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.845175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 
00:32:06.323 [2024-11-06 12:38:37.845429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.845473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.845689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.845721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.845935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.845969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.846162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.846172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.846257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.846267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 
00:32:06.323 [2024-11-06 12:38:37.846435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.846445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.846709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.846742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.847021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.847053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.847248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.847282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.847407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.847446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 
00:32:06.323 [2024-11-06 12:38:37.847730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.847763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.848044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.848078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.848275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.848307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.323 [2024-11-06 12:38:37.848600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.323 [2024-11-06 12:38:37.848634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.323 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.848838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.848870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.849064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.849074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.849298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.849330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.849534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.849569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.849757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.849790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.849994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.850027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.850329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.850361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.850637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.850647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.850727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.850737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.850881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.850892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.851054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.851086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.851221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.851255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.851400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.851433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.851647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.851658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.851812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.851854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.852051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.852085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.852243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.852274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.852454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.852502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.852689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.852722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.853067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.853100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.853431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.853473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.853757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.853789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.854050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.854081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.854398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.854431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.854726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.854760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.854964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.854997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.855257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.855266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.855448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.855496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.855679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.855710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.856008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.856042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.856166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.856176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.856429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.856488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.856778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.856810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.856994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.857004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.857270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.857302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.857604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.857639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.857923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.857933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.858101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.858133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.858341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.858373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.858669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.858704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.859021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.859031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.859300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.859332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.859547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.859580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.859793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.859826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.860028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.860038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.860218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.860252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.860533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.860568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.860781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.860814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.861000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.861009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.861167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.861206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.861438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.861482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.861680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.861691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.861856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.861867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.862026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.862037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.862306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.862338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.862520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.862555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.862761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.862792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.862997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.863008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.863230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.863262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.863519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.863554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.863860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.863873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.864010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.864020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.864269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.864301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.864563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.864598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.864797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.864828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.865014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.865047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 
00:32:06.324 [2024-11-06 12:38:37.865253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.865286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.324 [2024-11-06 12:38:37.865568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.324 [2024-11-06 12:38:37.865579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.324 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.865842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.865866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.866173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.866207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.866489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.866523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.866833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.866865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.867076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.867108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.867383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.867416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.867634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.867645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.867803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.867836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.868164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.868235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.868579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.868649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.868879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.868916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.869231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.869265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.869561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.869597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.869913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.869947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.870232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.870264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.870491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.870525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.870809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.870841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.871029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.871061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.871316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.871349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.871621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.871656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.871899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.871930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.872215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.872247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.872405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.872437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.872659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.872671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.872861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.872932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.873246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.873282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.873630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.873665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.873952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.873985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.874300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.874332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.874589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.874622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.874919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.874951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.875233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.875266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.875491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.875525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.875803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.875835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.876116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.876126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.876342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.876351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.876558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.876568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.876803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.876813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.877090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.877123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.877315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.877346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.877536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.877570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.877852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.877882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.878167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.878200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.878484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.878517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.878802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.878834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.879034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.879044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.879307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.879316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.879462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.879472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.879724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.879736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.879925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.879956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.880215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.880246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.880482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.880516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.880636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.880646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.880824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.880834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.881035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.881067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.881355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.881387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.881671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.881704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.881977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.881987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 
00:32:06.325 [2024-11-06 12:38:37.882218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.882228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.325 [2024-11-06 12:38:37.882363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.325 [2024-11-06 12:38:37.882373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.325 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.882547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.882581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.882800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.882831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.883095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.883128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 
00:32:06.326 [2024-11-06 12:38:37.883271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.883303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.883588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.883622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.883754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.883786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.883878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.883888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.884043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.884052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 
00:32:06.326 [2024-11-06 12:38:37.884206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.884215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.884364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.884397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.884554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.884588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.884845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.884878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.885138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.885148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 
00:32:06.326 [2024-11-06 12:38:37.885366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.885398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.885567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.885577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.885819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.885850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.886067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.886099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 00:32:06.326 [2024-11-06 12:38:37.886290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.886322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it. 
00:32:06.326 [2024-11-06 12:38:37.886567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.326 [2024-11-06 12:38:37.886577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.326 qpair failed and we were unable to recover it.
[... the same posix.c:1054 / nvme_tcp.c:2288 error pair repeats for every reconnect attempt from 12:38:37.886567 through 12:38:37.913120 (over one hundred occurrences, all for tqpair=0x7f205c000b90, addr=10.0.0.2, port=4420, errno = 111, each ending "qpair failed and we were unable to recover it."); identical repeats omitted ...]
00:32:06.610 [2024-11-06 12:38:37.913436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.913478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.913721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.913731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.913894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.913925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.914123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.914155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.914368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.914401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 
00:32:06.610 [2024-11-06 12:38:37.914617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.914651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.914959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.914990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.915116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.915127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.915215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.915225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.915426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.915469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 
00:32:06.610 [2024-11-06 12:38:37.915688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.915721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.915938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.915971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.916251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.916260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.916497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.916508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.916659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.916669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 
00:32:06.610 [2024-11-06 12:38:37.916890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.916902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.917141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.917172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.917455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.917497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.917783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.917815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.917950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.917981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 
00:32:06.610 [2024-11-06 12:38:37.918266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.918297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.918451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.918502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.918640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.918673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.918960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.918970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.919202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.919211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 
00:32:06.610 [2024-11-06 12:38:37.919380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.919389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.919656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.919691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.919880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.919913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.920048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.610 qpair failed and we were unable to recover it. 00:32:06.610 [2024-11-06 12:38:37.920293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.610 [2024-11-06 12:38:37.920303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.920527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.920561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.920765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.920797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.921064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.921096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.921390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.921421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.921743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.921776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.922023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.922032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.922183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.922192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.922439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.922482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.922682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.922713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.922991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.923001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.923216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.923226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.923502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.923534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.923891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.923962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.924267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.924278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.924483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.924518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.924791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.924800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.924975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.924984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.925164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.925196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.925411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.925704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.925714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.925954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.925986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.926248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.926281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.926594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.926630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.926885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.926916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.927066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.927099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.611 [2024-11-06 12:38:37.927382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.927424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.927710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.927743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.928013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.928023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.928340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.928372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 00:32:06.611 [2024-11-06 12:38:37.928660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.611 [2024-11-06 12:38:37.928694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.611 qpair failed and we were unable to recover it. 
00:32:06.612 [2024-11-06 12:38:37.928895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.928927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.929218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.929250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.929536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.929568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.929771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.929804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.930101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.930121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 
00:32:06.612 [2024-11-06 12:38:37.930227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.930236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.930452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.930494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.930779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.930811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.931122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.931155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.931472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.931505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 
00:32:06.612 [2024-11-06 12:38:37.931710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.931742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.931960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.931970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.932100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.932131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.932349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.932381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 00:32:06.612 [2024-11-06 12:38:37.932591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.612 [2024-11-06 12:38:37.932625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.612 qpair failed and we were unable to recover it. 
00:32:06.612 [2024-11-06 12:38:37.932909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.612 [2024-11-06 12:38:37.932941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.612 qpair failed and we were unable to recover it.
[... identical three-line failure pattern repeats ~115 times from 12:38:37.933 through 12:38:37.962, all against addr=10.0.0.2 port=4420, with tqpair cycling among 0x7f2060000b90, 0x7f2068000b90, 0x7f205c000b90, and 0x1d87550 ...]
00:32:06.615 [2024-11-06 12:38:37.962610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.962643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.962865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.962897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.963100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.963132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.963414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.963445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.963747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.963779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 
00:32:06.615 [2024-11-06 12:38:37.964059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.964092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.964377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.964408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.964699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.964734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.965019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.965051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.615 qpair failed and we were unable to recover it. 00:32:06.615 [2024-11-06 12:38:37.965337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.615 [2024-11-06 12:38:37.965369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.965558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.965592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.965859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.965891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.966163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.966174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.966386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.966396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.966548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.966558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.966734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.966773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.967033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.967065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.967346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.967378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.967670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.967704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.967859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.967890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.968081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.968091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.968255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.968285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.968569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.968602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.968918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.968948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.969176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.969207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.969406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.969437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.969738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.969772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.970035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.970067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.970343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.970374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.970667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.970701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.970986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.970995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.971250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.971282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.971430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.971471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.971754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.971763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.971846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.971856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.972089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.972121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.972397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.972428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.972728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.972761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.973043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.973075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.973298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.973329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 
00:32:06.616 [2024-11-06 12:38:37.973622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.973654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.973840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.973872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.974151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.616 [2024-11-06 12:38:37.974160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.616 qpair failed and we were unable to recover it. 00:32:06.616 [2024-11-06 12:38:37.974380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.974412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.974708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.974978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.975009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.975278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.975287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.975529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.975539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.975798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.975830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.976032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.976065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.976248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.976278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.976562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.976596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.976891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.976928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.977044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.977076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.977294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.977303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.977532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.977565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.977694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.977724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.977921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.977952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.978225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.978256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.978569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.978603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.978865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.978897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.979212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.979244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.979449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.979492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.979693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.979724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.979977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.979986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.980158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.980167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.980336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.980345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.980614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.980648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.980880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.980911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.981138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.981170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.981455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.981496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.981755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.981788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.982019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.982050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.982337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.982369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 00:32:06.617 [2024-11-06 12:38:37.982659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.982691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it. 
00:32:06.617 [2024-11-06 12:38:37.982915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.617 [2024-11-06 12:38:37.982948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.617 qpair failed and we were unable to recover it.
[identical connect() failure / qpair-recovery error lines repeated from 12:38:37.982915 through 12:38:38.013502; tqpair values observed: 0x7f2068000b90, 0x1d87550, 0x7f205c000b90 — all against addr=10.0.0.2, port=4420, errno = 111]
00:32:06.621 [2024-11-06 12:38:38.013729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.013738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.013961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.013994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.014261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.014292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.014585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.014619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.014903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.014934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 
00:32:06.621 [2024-11-06 12:38:38.015247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.015279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.015495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.015528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.015845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.015880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.016112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.016143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.016379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.016421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 
00:32:06.621 [2024-11-06 12:38:38.016656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.016692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.621 qpair failed and we were unable to recover it. 00:32:06.621 [2024-11-06 12:38:38.016994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.621 [2024-11-06 12:38:38.017027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.017230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.017240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.017480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.017515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.017718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.017750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.018030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.018063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.018218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.018251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.018386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.018418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.018559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.018591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.018734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.018766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.019041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.019074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.019306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.019315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.019566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.019609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.019880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.019913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.020172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.020203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.020424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.020434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.020609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.020642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.020928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.020961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.021231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.021263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.021477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.021511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.021745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.021777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.022034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.022045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.022200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.022209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.022430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.022472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.022673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.022705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.022918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.022952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.023242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.023277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.023559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.023595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.023823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.023856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.024167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.024201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.024490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.024523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.024831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.024864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.025132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.025166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.025393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.025425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.025693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.025728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.025938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.025971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.026237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.026271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.026494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.026528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.026761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.026801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.027062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.027126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 
00:32:06.622 [2024-11-06 12:38:38.027396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.027438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.027749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.027786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.622 [2024-11-06 12:38:38.028081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.622 [2024-11-06 12:38:38.028114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.622 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.028243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.028275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.028478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.028488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 
00:32:06.623 [2024-11-06 12:38:38.028721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.028752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.029089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.029121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.029343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.029353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.029539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.029574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.029778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.029810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 
00:32:06.623 [2024-11-06 12:38:38.030135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.030168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.030362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.030371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.030532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.030544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.030775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.030808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.031083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.031115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 
00:32:06.623 [2024-11-06 12:38:38.031361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.031371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.031598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.031634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.031803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.031836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.032044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.032076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.032362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.032393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 
00:32:06.623 [2024-11-06 12:38:38.032629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.032664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.032949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.032980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.033220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.033253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.033536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.033568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 00:32:06.623 [2024-11-06 12:38:38.033793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.623 [2024-11-06 12:38:38.033830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.623 qpair failed and we were unable to recover it. 
00:32:06.623 [2024-11-06 12:38:38.033981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.623 [2024-11-06 12:38:38.033991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.623 qpair failed and we were unable to recover it.
[The same three-record sequence (posix_sock_create connect() failure with errno = 111 ECONNREFUSED, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats approximately 115 times between 12:38:38.033981 and 12:38:38.063338. The tqpair address is 0x7f2068000b90 for the first entries and 0x7f2060000b90 from 12:38:38.037600 onward; all other fields are unchanged across the repeats.]
00:32:06.626 [2024-11-06 12:38:38.063514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.063547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.063768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.063800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.064015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.064047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.064250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.064282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.064533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.064543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.064754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.064765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.064931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.064965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.065246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.065279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.065509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.065543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.065760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.065793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.065978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.066010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.066216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.066248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.066505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.066540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.066766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.066798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.067085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.067118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.067373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.067405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.067691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.067725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.067918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.067951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.068152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.068185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.068391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.068404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.068670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.068704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.068976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.069010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.069224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.069233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.069449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.069463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.069734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.069767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.069903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.069935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.070120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.070152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.070370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.070408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.070568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.070578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.070743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.070776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 
00:32:06.626 [2024-11-06 12:38:38.070999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.071032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.071247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.071280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.071587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.626 [2024-11-06 12:38:38.071598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.626 qpair failed and we were unable to recover it. 00:32:06.626 [2024-11-06 12:38:38.071746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.071755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.071846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.071856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.072069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.072242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.072331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.072552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.072655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.072854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.072885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.073141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.073175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.073356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.073388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.073588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.073599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.073814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.073847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.074174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.074207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.074405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.074437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.074568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.074594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.074768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.074777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.074938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.074948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.075159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.075169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.075406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.075416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.075599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.075610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.075824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.075834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.076061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.076094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.076293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.076325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.076530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.076564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.076769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.076802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.076988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.077021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.077259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.077297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.077533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.077568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.077872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.078165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.078198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.078475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.078486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.078648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.078658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.078835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.078867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.079064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.079096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.079355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.079394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.079577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.079588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.079739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.079772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.079996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.080029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.080221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.080253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.080436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.080446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.080562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.080572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 00:32:06.627 [2024-11-06 12:38:38.080810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.627 [2024-11-06 12:38:38.080819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.627 qpair failed and we were unable to recover it. 
00:32:06.627 [2024-11-06 12:38:38.081074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.627 [2024-11-06 12:38:38.081084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.627 qpair failed and we were unable to recover it.
[... identical connect() failed / sock connection error / qpair failed records for tqpair=0x7f2060000b90 (addr=10.0.0.2, port=4420) repeat continuously from 12:38:38.081331 through 12:38:38.107322; repeats elided ...]
00:32:06.630 [2024-11-06 12:38:38.107471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.107482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.107739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.107771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.107966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.107999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.108203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.108213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.108428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.108470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 
00:32:06.630 [2024-11-06 12:38:38.108799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.108831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.109062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.109340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.109437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.109595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 
00:32:06.630 [2024-11-06 12:38:38.109795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.109955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.109988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.110245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.110277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.110493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.110526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.110790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.110822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 
00:32:06.630 [2024-11-06 12:38:38.111079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.111112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.111355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.111427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.111784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.111855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.112214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.112287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.112639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.112673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 
00:32:06.630 [2024-11-06 12:38:38.112965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.113000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.113269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.113281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.113426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.113467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.113667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.113700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 00:32:06.630 [2024-11-06 12:38:38.113919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.630 [2024-11-06 12:38:38.113953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.630 qpair failed and we were unable to recover it. 
00:32:06.630 [2024-11-06 12:38:38.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.114187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.114388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.114398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.114552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.114563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.114745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.114778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.115066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.115106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.115266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.115276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.115474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.115507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.115767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.115801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.115989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.116022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.116241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.116276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.116465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.116476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.116651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.116661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.116862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.117146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.117178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.117392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.117425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.117726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.117742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.118005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.118106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.118302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.118547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.118764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.118940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.118951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.119143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.119177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.119393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.119425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.119721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.119754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.119962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.119995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.120254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.120287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.120431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.120441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.120598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.120609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.120789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.120798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.120978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.121010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.121344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.121354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.121603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.121614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.121863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.121873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.122014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.122024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.122243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.122276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.122478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.122511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.122793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.122827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.123032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.123065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.123343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.123375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.123666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.123677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.123822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.123832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.124080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.124114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.124345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.124378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.631 [2024-11-06 12:38:38.124646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.124658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 
00:32:06.631 [2024-11-06 12:38:38.124823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.631 [2024-11-06 12:38:38.124857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.631 qpair failed and we were unable to recover it. 00:32:06.632 [2024-11-06 12:38:38.125168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.632 [2024-11-06 12:38:38.125200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.632 qpair failed and we were unable to recover it. 00:32:06.632 [2024-11-06 12:38:38.125444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.632 [2024-11-06 12:38:38.125488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.632 qpair failed and we were unable to recover it. 00:32:06.632 [2024-11-06 12:38:38.125685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.632 [2024-11-06 12:38:38.125717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.632 qpair failed and we were unable to recover it. 00:32:06.632 [2024-11-06 12:38:38.125904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.632 [2024-11-06 12:38:38.125936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.632 qpair failed and we were unable to recover it. 
00:32:06.632 [2024-11-06 12:38:38.126210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.632 [2024-11-06 12:38:38.126220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.632 qpair failed and we were unable to recover it.
00:32:06.632 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats from 12:38:38.126 through 12:38:38.155 for tqpair=0x7f2068000b90, 0x7f205c000b90, 0x7f2060000b90, and 0x1d87550, all with addr=10.0.0.2, port=4420 ...]
00:32:06.634 [2024-11-06 12:38:38.155234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.155265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.155557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.155567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.155806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.155815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.156057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.156311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.156339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 
00:32:06.634 [2024-11-06 12:38:38.156597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.156650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.156915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.156946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.157129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.157161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.157344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.157376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.157581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.157591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 
00:32:06.634 [2024-11-06 12:38:38.157829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.157860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.158092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.158125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.158322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.158359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.158610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.158620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.158829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.158838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 
00:32:06.634 [2024-11-06 12:38:38.159008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.159017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.159200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.159231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.159482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.159515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.159804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.159835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.160167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.160199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 
00:32:06.634 [2024-11-06 12:38:38.160471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.160504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.160790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.160821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.161107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.161139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.161396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.161427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.161601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.161611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 
00:32:06.634 [2024-11-06 12:38:38.161861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.162160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.634 [2024-11-06 12:38:38.162198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.634 qpair failed and we were unable to recover it. 00:32:06.634 [2024-11-06 12:38:38.162462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.162472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.162698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.162708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.162857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.162889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.163175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.163207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.163497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.163531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.163813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.163844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.164043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.164075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.164382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.164414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.164719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.164752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.165020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.165052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.165324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.165356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.165611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.165621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.165831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.165840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.165991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.166001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.166154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.166186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.166453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.166503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.166788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.166821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.167091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.167124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.167329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.167370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.167613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.167623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.167865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.167874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.168058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.168067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.168308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.168340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.168632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.168642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.168844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.168853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.169011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.169021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.169272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.169304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.169591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.169625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.169842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.169874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.170146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.170178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.170478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.170488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.170660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.170669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.170903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.170912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.171074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.171106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.171336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.171368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.171716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.171750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.172024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.172056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.172268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.172301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.172611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.172652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.172897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.172909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.173117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.173127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.173360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.173369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.173545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.173555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.173718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.173750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.174068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.174100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.174326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.174357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.174644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.174677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.635 [2024-11-06 12:38:38.174967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.174999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 
00:32:06.635 [2024-11-06 12:38:38.175255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.635 [2024-11-06 12:38:38.175287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.635 qpair failed and we were unable to recover it. 00:32:06.636 [2024-11-06 12:38:38.175490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.636 [2024-11-06 12:38:38.175500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.636 qpair failed and we were unable to recover it. 00:32:06.636 [2024-11-06 12:38:38.175712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.636 [2024-11-06 12:38:38.175744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.636 qpair failed and we were unable to recover it. 00:32:06.636 [2024-11-06 12:38:38.175868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.636 [2024-11-06 12:38:38.175900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.636 qpair failed and we were unable to recover it. 00:32:06.636 [2024-11-06 12:38:38.176103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.636 [2024-11-06 12:38:38.176135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.636 qpair failed and we were unable to recover it. 
[The same three-line error sequence repeats with only the timestamps changing, from 12:38:38.176423 through 12:38:38.204310 (elapsed 00:32:06.636–00:32:06.914): every retry against tqpair=0x7f2060000b90, addr=10.0.0.2, port=4420 fails with connect() errno = 111, and each qpair fails without recovery.]
00:32:06.914 [2024-11-06 12:38:38.204577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.204588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.204803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.204813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.205029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.205123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.205357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 
00:32:06.914 [2024-11-06 12:38:38.205579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.205687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.205909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.205920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.206147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.206157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.206403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.206413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 
00:32:06.914 [2024-11-06 12:38:38.206667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.206678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.206906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.206916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.207079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.207089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.207249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.207259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.207414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.207446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 
00:32:06.914 [2024-11-06 12:38:38.207729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.207763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.208017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.208050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.208361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.208393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.208658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.208692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 00:32:06.914 [2024-11-06 12:38:38.208937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.914 [2024-11-06 12:38:38.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.914 qpair failed and we were unable to recover it. 
00:32:06.914 [2024-11-06 12:38:38.209285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.209317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.209584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.209618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.209821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.209853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.210177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.210209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.210480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.210514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.210776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.210807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.211103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.211135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.211273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.211304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.211496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.211507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.211674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.211705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.211990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.212022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.212230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.212261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.212553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.212586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.212874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.212906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.213197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.213229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.213495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.213528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.213795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.213806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.214055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.214065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.214228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.214237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.214483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.214493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.214752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.214783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.215078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.215110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.215394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.215436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.215582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.215592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.215837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.215868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.216159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.216190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.216471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.216481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.216730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.216762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.217049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.217081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.217369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.217402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.217588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.217599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.217865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.217875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.218033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.218043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.218193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.218202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.218353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.218362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 
00:32:06.915 [2024-11-06 12:38:38.218579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.218613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.915 qpair failed and we were unable to recover it. 00:32:06.915 [2024-11-06 12:38:38.218895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.915 [2024-11-06 12:38:38.218926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.219185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.219217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.219473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.219483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.219735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.219766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.220036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.220067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.220372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.220404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.220604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.220614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.220708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.220718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.220859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.220869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.221128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.221160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.221306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.221337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.221576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.221609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.221789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.221798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.221976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.221985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.222159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.222169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.222380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.222390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.222542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.222552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.222796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.222827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.223136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.223168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.223378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.223409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.223630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.223669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.223924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.223934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.224197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.224206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.224345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.224438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.224450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.224680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.225037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.225074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.225366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.225399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.225699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.225734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.225927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.225937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.226159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.226191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.226497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.226506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.226597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.226606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.226846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.226856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 
00:32:06.916 [2024-11-06 12:38:38.227143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.227175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.227399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.227431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.916 qpair failed and we were unable to recover it. 00:32:06.916 [2024-11-06 12:38:38.227722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.916 [2024-11-06 12:38:38.227732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.227943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.227953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.228194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.228203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.228475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.228485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.228663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.228673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.228843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.228874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.229116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.229147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.229406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.229438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.229578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.229619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.229860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.229870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.230042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.230052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.230266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.230276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.230482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.230516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.230730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.230763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.230889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.230919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.231201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.231233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.231506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.231541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.231837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.231847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.232026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.232273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.232370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.232476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.232697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.232889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.232898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.233114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.233152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.233299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.233330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.233542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.233574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.233801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.233833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.234139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.234171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.234443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.234484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.234589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.234598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.234705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.234715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.234953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.234962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.235151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.235160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.235485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.235519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.235730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.235762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.236033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.236065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.236382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.236414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 
00:32:06.917 [2024-11-06 12:38:38.236725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.236758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.917 qpair failed and we were unable to recover it. 00:32:06.917 [2024-11-06 12:38:38.237023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.917 [2024-11-06 12:38:38.237055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.237298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.237331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.237650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.237683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.237916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.237948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.238241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.238273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.238492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.238526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.238841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.238873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.239063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.239096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.239328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.239358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.239627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.239638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.239964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.239996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.240277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.240309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.240604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.240638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.240914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.240923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.241072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.241104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.241391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.241423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.241744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.241778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.242027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.242036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.242250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.242259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.242416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.242448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.242749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.242784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.243042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.243073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.243386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.243417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.243622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.243632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.243791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.243822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.244010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.244048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.244352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.244383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.244678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.244712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.244997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.245028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 
00:32:06.918 [2024-11-06 12:38:38.245321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.245353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.245588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.918 [2024-11-06 12:38:38.245622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.918 qpair failed and we were unable to recover it. 00:32:06.918 [2024-11-06 12:38:38.245891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.245900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.246072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.246103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.246307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.246338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 
00:32:06.919 [2024-11-06 12:38:38.246654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.246688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.246979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.247016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.247229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.247239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.247482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.247492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.247730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.247740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 
00:32:06.919 [2024-11-06 12:38:38.248009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.248018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.248228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.248237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.248479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.248488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.248690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.248721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 00:32:06.919 [2024-11-06 12:38:38.249007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.919 [2024-11-06 12:38:38.249040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:06.919 qpair failed and we were unable to recover it. 
00:32:06.920 [2024-11-06 12:38:38.257295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.920 [2024-11-06 12:38:38.257305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:06.920 qpair failed and we were unable to recover it.
00:32:06.920 [2024-11-06 12:38:38.257398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.920 [2024-11-06 12:38:38.257408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:06.920 qpair failed and we were unable to recover it.
00:32:06.920 [2024-11-06 12:38:38.257828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.920 [2024-11-06 12:38:38.257907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.920 qpair failed and we were unable to recover it.
00:32:06.920 [2024-11-06 12:38:38.258237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.920 [2024-11-06 12:38:38.258249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.920 qpair failed and we were unable to recover it.
00:32:06.920 [2024-11-06 12:38:38.258504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.920 [2024-11-06 12:38:38.258516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.920 qpair failed and we were unable to recover it.
00:32:06.922 [2024-11-06 12:38:38.278872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.278905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.279183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.279216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.279484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.279519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.279827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.279855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.279993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.280028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 
00:32:06.922 [2024-11-06 12:38:38.280243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.280277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.280604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.280650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.280954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.280964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.281241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.281251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.281527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.281539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 
00:32:06.922 [2024-11-06 12:38:38.281648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.281658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.281875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.281908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.282238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.282272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.282517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.282552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.282795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.282827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 
00:32:06.922 [2024-11-06 12:38:38.283061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.283072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.283310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.283342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.283640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.283676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.284095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.284174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.284483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 
00:32:06.922 [2024-11-06 12:38:38.284814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.284851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.285202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.285241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.285490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.285525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.285692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.285730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 00:32:06.922 [2024-11-06 12:38:38.285894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.285905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.922 qpair failed and we were unable to recover it. 
00:32:06.922 [2024-11-06 12:38:38.286105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.922 [2024-11-06 12:38:38.286139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.286402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.286434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.286679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.286690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.286912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.286945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.287160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.287194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 
00:32:06.923 [2024-11-06 12:38:38.287385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.287419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.287696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.287730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.288014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.288047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.288269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.288302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.288571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.288607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 
00:32:06.923 [2024-11-06 12:38:38.288888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.288923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.289214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.289225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.289531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.289544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.289781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.289793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.290011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.290022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 
00:32:06.923 [2024-11-06 12:38:38.290217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.290228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.290484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.290496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.290671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.290682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.290823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.290858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.291136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.291168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 
00:32:06.923 [2024-11-06 12:38:38.291500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.291535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.291751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.291762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.291943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.291978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.292302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.292335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.292578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.292614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 
00:32:06.923 [2024-11-06 12:38:38.292820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.292854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.293131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.293143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.293367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.923 [2024-11-06 12:38:38.293379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.923 qpair failed and we were unable to recover it. 00:32:06.923 [2024-11-06 12:38:38.293555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.293567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.293820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.293854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 
00:32:06.924 [2024-11-06 12:38:38.294057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.294090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.294251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.294285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.294525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.294560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.294771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.294809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.295091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.295102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 
00:32:06.924 [2024-11-06 12:38:38.295336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.295347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.295621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.295633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.295871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.295882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.296073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.296084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.296357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.296368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 
00:32:06.924 [2024-11-06 12:38:38.296675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.296688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.296942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.296954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.297098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.297110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.297369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.297402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.297627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.297663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 
00:32:06.924 [2024-11-06 12:38:38.297910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.297945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.298164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.298175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.298404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.298436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.298740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.298774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 00:32:06.924 [2024-11-06 12:38:38.299041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.924 [2024-11-06 12:38:38.299052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.924 qpair failed and we were unable to recover it. 
00:32:06.924 [2024-11-06 12:38:38.299247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.924 [2024-11-06 12:38:38.299258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.924 qpair failed and we were unable to recover it.
[... the same three-record error group repeats verbatim, with timestamps advancing from 12:38:38.299509 through 12:38:38.326928, always for tqpair=0x7f2068000b90, addr=10.0.0.2, port=4420 ...]
00:32:06.927 [2024-11-06 12:38:38.327171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.927 [2024-11-06 12:38:38.327205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.927 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.327416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.327448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.327663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.327696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.327948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.327983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.328218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.328253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.328404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.328437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.328723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.328757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.329072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.329083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.329246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.329256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.329484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.329520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.329841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.329874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.330175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.330210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.330508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.330545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.330745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.330756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.330868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.330901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.331109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.331143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.331440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.331485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.331720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.331731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.331921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.331932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.332047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.332058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.332234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.332245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.332333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.332358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.332595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.332631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.332869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.332903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.333233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.333255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.333512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.333524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.333688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.333700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.333925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.333959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.334268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.334309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.334542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.334576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 
00:32:06.928 [2024-11-06 12:38:38.334784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.334795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.334959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.334971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.928 [2024-11-06 12:38:38.335078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.928 [2024-11-06 12:38:38.335090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.928 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.335272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.335283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.335558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.335592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 
00:32:06.929 [2024-11-06 12:38:38.335806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.335841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.335967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.335979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.336145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.336156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.336330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.336341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.336603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.336638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 
00:32:06.929 [2024-11-06 12:38:38.336874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.336885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.337046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.337056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.337251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.337284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.337523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.337558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.337710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.337722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 
00:32:06.929 [2024-11-06 12:38:38.338035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.338070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.338272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.338306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.338506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.338542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.338734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.338745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.338864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.338875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 
00:32:06.929 [2024-11-06 12:38:38.339054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.339067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.339232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.339243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.339491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.339526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.339744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.339778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 00:32:06.929 [2024-11-06 12:38:38.339926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.929 [2024-11-06 12:38:38.339960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.929 qpair failed and we were unable to recover it. 
00:32:06.929 [2024-11-06 12:38:38.340133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.340166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.340476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.340512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.340704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.340714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.340905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.340938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.341234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.341268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.341495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.341530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.341767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.341779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.342003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.342037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.342386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.342420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.342579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.342606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.342830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.342842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.343069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.343081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.343282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.343294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.343525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.343539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.343715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.343725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.343972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.343984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.344164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.344176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.344440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.344487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.344704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.344737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.345038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.345072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.345322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.345357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.345671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.345705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.345918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.345954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.346243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.346254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.346438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.346449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.346536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.346546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.346666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.346679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.346921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.346955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.347103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.347137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.930 [2024-11-06 12:38:38.347361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.347396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 
00:32:06.930 [2024-11-06 12:38:38.347650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.930 [2024-11-06 12:38:38.347684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.930 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.347838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.347849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.347960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.347970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.348184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.348219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.348373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.348407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 
00:32:06.931 [2024-11-06 12:38:38.348761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.348801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.349089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.349100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.349362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.349395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.349646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.349680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.349896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.349907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 
00:32:06.931 [2024-11-06 12:38:38.350056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.350091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.350315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.350349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.350709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.350744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.350912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.350944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.351188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.351220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 
00:32:06.931 [2024-11-06 12:38:38.351434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.351496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.351711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.351743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.351969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.352002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.352211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.352233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.352455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.352471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 
00:32:06.931 [2024-11-06 12:38:38.352734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.352746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.352861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.352871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.353054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.931 [2024-11-06 12:38:38.353086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.931 qpair failed and we were unable to recover it. 00:32:06.931 [2024-11-06 12:38:38.353389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.353432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.353683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.353717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.353866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.353900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.354186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.354197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.354353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.354364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.354542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.354553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.354829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.354840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.355015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.355197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.355324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.355537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.355732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.355916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.355928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.356077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.356112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.356314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.356346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.356681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.356715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.356989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.357023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.357274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.357306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.357575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.357608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.357852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.357886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.358025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.358059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.358264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.358295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.358604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.358641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.358954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.358986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.359253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.359286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.359534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.359571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.359780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.359791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.359968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.360001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.360306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.360340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.360556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.360593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.360730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.360762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 00:32:06.932 [2024-11-06 12:38:38.361011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.932 [2024-11-06 12:38:38.361043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.932 qpair failed and we were unable to recover it. 
00:32:06.932 [2024-11-06 12:38:38.361324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.361358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.361553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.361587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.361754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.361764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.362013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.362024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.362184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.362216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 
00:32:06.933 [2024-11-06 12:38:38.362476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.362510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.362730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.362765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.362934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.362967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.363117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.363155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.363428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.363438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 
00:32:06.933 [2024-11-06 12:38:38.363596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.363607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.363828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.363840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.364160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.364192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.364414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.364448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.364666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.364699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 
00:32:06.933 [2024-11-06 12:38:38.364822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.364947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.364958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.365131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.365383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.365417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.365722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.365759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 
00:32:06.933 [2024-11-06 12:38:38.365964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.365975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.366175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.366208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.366434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.366482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.366643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.366675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.366841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.366874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 
00:32:06.933 [2024-11-06 12:38:38.367141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.367174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.933 [2024-11-06 12:38:38.367377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.933 [2024-11-06 12:38:38.367410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.933 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.367586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.367624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.367789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.367799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.367978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.367990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 
00:32:06.934 [2024-11-06 12:38:38.368080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.368090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.368239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.368250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.370022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.370047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.370324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.370349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.370529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.370541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 
00:32:06.934 [2024-11-06 12:38:38.370664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.370676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.370899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.370911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.371207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.371217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.371388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.371422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 00:32:06.934 [2024-11-06 12:38:38.371646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.934 [2024-11-06 12:38:38.371682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.934 qpair failed and we were unable to recover it. 
00:32:06.934 [2024-11-06 12:38:38.371886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.371896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.372824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.372855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.373940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.934 [2024-11-06 12:38:38.373950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.934 qpair failed and we were unable to recover it.
00:32:06.934 [2024-11-06 12:38:38.374074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.374106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.374294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.374327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.374557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.374592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.374786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.374798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.374887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.374899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.374999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.375125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.375304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.375385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.375638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.375822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.375857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.376074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.376109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.376377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.376410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.376569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.376606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.376829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.376862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.377144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.377178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.377391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.377402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.377485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.377498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.377721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.377731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.377946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.377957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.378132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.378167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.378303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.378336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.378564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.378600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.378818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.378852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.935 [2024-11-06 12:38:38.379118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.935 [2024-11-06 12:38:38.379151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.935 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.379413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.379424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.379592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.379604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.379755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.379767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.380005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.380016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.380270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.380281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.380445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.380456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.380580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.380617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.380925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.380957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.381167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.381179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.381327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.381528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.381542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.381743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.381776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.381985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.382020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.382216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.382248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.382524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.382535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.382707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.382719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.382878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.382889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.383048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.383059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.383377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.383410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.383574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.383608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.383803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.383836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.383969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.384003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.384197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.384231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.384504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.384539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.384750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.384785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.385058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.385092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.385303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.385314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.385537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.385549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.936 [2024-11-06 12:38:38.385640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.936 [2024-11-06 12:38:38.385651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.936 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.385908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.385941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.386283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.386316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.386597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.386635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.386928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.386963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.387114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.387147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.387352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.387362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.387557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.387569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.387794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.387826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.388097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.388175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.388508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.388549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.388782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.388817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.389109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.389142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.389358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.389391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.389740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.389785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.390028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.390044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.390325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.390337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.390571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.390581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.390829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.390840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.390945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.390956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.391230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.391264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.391589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.391623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.391764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.391798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.392129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.392161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.392394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.392405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.392629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.392640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.392861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.392871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.393039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.393081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.393299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.937 [2024-11-06 12:38:38.393333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:06.937 qpair failed and we were unable to recover it.
00:32:06.937 [2024-11-06 12:38:38.393562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.937 [2024-11-06 12:38:38.393598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.937 qpair failed and we were unable to recover it. 00:32:06.937 [2024-11-06 12:38:38.393762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.937 [2024-11-06 12:38:38.393773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.937 qpair failed and we were unable to recover it. 00:32:06.937 [2024-11-06 12:38:38.393940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.937 [2024-11-06 12:38:38.393952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.937 qpair failed and we were unable to recover it. 00:32:06.937 [2024-11-06 12:38:38.394073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.394107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.394251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.394286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.394635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.394671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.394876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.394909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.395164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.395174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.395405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.395438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.395668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.395702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.395990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.396024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.396325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.396335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.396501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.396511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.396770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.396804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.397043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.397053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.397313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.397346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.397560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.397594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.397965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.397999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.398239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.398271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.398519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.398552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.398851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.398890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.399175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.399207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.399495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.399529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.399763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.399798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.400029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.400062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.400217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.400249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.400525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.400559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.400699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.400732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.400948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.400980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.401235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.401245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.401427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.401437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.401726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.401738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.401921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.401931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.402019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.402030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.402289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.402336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 
00:32:06.938 [2024-11-06 12:38:38.402569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.402604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.938 qpair failed and we were unable to recover it. 00:32:06.938 [2024-11-06 12:38:38.402814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.938 [2024-11-06 12:38:38.402846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.403096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.403106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.403352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.403362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.403653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.403688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.403958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.403991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.404140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.404172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.404501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.404535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.404701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.404734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.404960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.404993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.405315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.405325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.405524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.405558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.405709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.405740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.406008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.406040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.406252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.406285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.406575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.406610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.406849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.406859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.407030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.407064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.407364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.407396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.407537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.407571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.407841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.407875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.408093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.408126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.408355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.408389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.408613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.408647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.408806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.408838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.409142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.409180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.409335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.409367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.409595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.409628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.409895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.409929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.410225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.410235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.939 [2024-11-06 12:38:38.410464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.410475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.410768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.410779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.410965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.410976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.411179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.411189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 00:32:06.939 [2024-11-06 12:38:38.411353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.939 [2024-11-06 12:38:38.411363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.939 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.411622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.411656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.411883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.411893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.412009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.412019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.412297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.412330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.412632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.412666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.412829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.412839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.413027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.413059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.413344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.413376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.413640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.413674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.413907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.413940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.414162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.414172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.414417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.414427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.414550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.414563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.414677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.414687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.414940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.414965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.415215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.415248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.415519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.415554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.415839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.415872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.416040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.416073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.416382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.416415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.416711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.416744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.417012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.417045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.417352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.417680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.417714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.417960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.417994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.940 [2024-11-06 12:38:38.418206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.418239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.418415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.418426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.418605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.418639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.418884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.418918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 00:32:06.940 [2024-11-06 12:38:38.419143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.940 [2024-11-06 12:38:38.419175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.940 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.419412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.419451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.419617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.419650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.419958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.419968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.420150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.420161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.420274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.420306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.420441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.420486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.420697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.420729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.421012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.421046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.421353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.421386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.421688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.421723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.422005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.422038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.422173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.422183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.422451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.422507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.422776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.422808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.423082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.423115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.423414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.423446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.423670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.423704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.423920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.423953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.424316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.424348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.424501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.424534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.424731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.424764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.424985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.424996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.425096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.425105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.425339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.425372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.425595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.425629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.425870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.425903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.426201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.426232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.426519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.426553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.426754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.426787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.427013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.427045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.427339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.427349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.427509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.427520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.427799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.427833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.428125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.428158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.428480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.428513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 
00:32:06.941 [2024-11-06 12:38:38.428759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.428791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.428957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.941 [2024-11-06 12:38:38.428991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.941 qpair failed and we were unable to recover it. 00:32:06.941 [2024-11-06 12:38:38.429291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.429301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.429536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.429570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.429795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.429829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.430129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.430166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.430368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.430378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.430479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.430490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.430661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.430671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.430873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.430907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.431061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.431093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.431390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.431423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.431624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.431659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.431920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.431930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.432081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.432113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.432311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.432344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.432638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.432880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.432912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.433188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.433230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.433453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.433471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.433663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.433673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.433792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.433802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.434051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.434091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.434357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.434390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.434704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.434738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.435041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.435074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.435376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.435407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.435630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.435664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.435924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.435958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.436221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.436232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.436484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.436518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.436753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.436785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.436991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.437002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.437199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.437232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.437528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.437562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 
00:32:06.942 [2024-11-06 12:38:38.437754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.437787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.437992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.438003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.942 qpair failed and we were unable to recover it. 00:32:06.942 [2024-11-06 12:38:38.438175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.942 [2024-11-06 12:38:38.438185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.438383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.438393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.438648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.438683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 
00:32:06.943 [2024-11-06 12:38:38.439018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.439049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.439339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.439371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.439503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.439538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.439769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.439801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.440024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.440057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 
00:32:06.943 [2024-11-06 12:38:38.440326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.440338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.440515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.440549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.440761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.440794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.441018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.441050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 00:32:06.943 [2024-11-06 12:38:38.441277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.943 [2024-11-06 12:38:38.441287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.943 qpair failed and we were unable to recover it. 
00:32:06.946 [2024-11-06 12:38:38.464909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.464920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.465085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.465095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.465317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.465349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.465529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.465542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.465690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.465701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 
00:32:06.946 [2024-11-06 12:38:38.465879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.465890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.466038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.466049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.466318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.466329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.466602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.466614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.466785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.466795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 
00:32:06.946 [2024-11-06 12:38:38.466975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.467144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.467154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.467351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.467362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.467601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.467613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.467785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.467795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 
00:32:06.946 [2024-11-06 12:38:38.467883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.467899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.468052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.468062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.468181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.468191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.946 qpair failed and we were unable to recover it. 00:32:06.946 [2024-11-06 12:38:38.468357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.946 [2024-11-06 12:38:38.468367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.468482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.468493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.468661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.468673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.468790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.468800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.469034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.469044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.469323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.469336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.469501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.469512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.469679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.469689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.469917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.469928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.470120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.470130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.470286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.470296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.470557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.470568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.470741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.470751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.470997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.471007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.471286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.471295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.471465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.471475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.471675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.471686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.471981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.471991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.472210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.472220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.472375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.472385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.472591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.472602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.472776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.472786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.472953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.472964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.473129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.473139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.473394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.473404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.473536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.473547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.473743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.473754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.473967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.473978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.474143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.474153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.474388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.474398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.474486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.474497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.474732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.474743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.474861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.474871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.474990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.475104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.475384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.475615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 
00:32:06.947 [2024-11-06 12:38:38.475788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.475971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.475981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.947 [2024-11-06 12:38:38.476082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.947 [2024-11-06 12:38:38.476093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.947 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.476256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.476267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.476496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.476508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.476661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.476671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.476899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.476909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.477173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.477183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.477405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.477415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.477586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.477597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.477767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.477779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.477907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.477918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.478080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.478236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.478413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.478598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.478799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.478924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.478935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.479029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.479039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.479263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.479273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.479516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.479527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.479705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.479715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.479876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.479886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.480104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.480313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.480516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.480628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.480868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.480989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.480999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 00:32:06.948 [2024-11-06 12:38:38.481155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.948 [2024-11-06 12:38:38.481165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.948 qpair failed and we were unable to recover it. 
00:32:06.948 [2024-11-06 12:38:38.481403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.948 [2024-11-06 12:38:38.481414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.948 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated 113 more times between 12:38:38.481 and 12:38:38.504 ...]
00:32:06.951 [2024-11-06 12:38:38.504056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:06.951 [2024-11-06 12:38:38.504066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:06.951 qpair failed and we were unable to recover it.
00:32:06.951 [2024-11-06 12:38:38.504225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-11-06 12:38:38.504236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.951 qpair failed and we were unable to recover it. 00:32:06.951 [2024-11-06 12:38:38.504429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-11-06 12:38:38.504439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.951 qpair failed and we were unable to recover it. 00:32:06.951 [2024-11-06 12:38:38.504697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-11-06 12:38:38.504711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.951 qpair failed and we were unable to recover it. 00:32:06.951 [2024-11-06 12:38:38.504866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-11-06 12:38:38.504876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.951 qpair failed and we were unable to recover it. 00:32:06.951 [2024-11-06 12:38:38.505042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.951 [2024-11-06 12:38:38.505051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.951 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.505320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.505329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.505507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.505518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.505747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.505757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.505941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.505951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.506184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.506270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.506369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.506547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.506724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.506979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.506989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.507173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.507183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.507368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.507379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.507621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.507632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.507894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.507904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.508168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.508178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.508413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.508424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.508586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.508597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.508754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.508765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.508937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.508948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.509160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.509313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.509535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.509638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.509799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.509974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.509984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.510066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.510151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.510375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.510477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.510747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.510919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.510930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.511166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.511176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.511380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.511391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.511578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.511589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.952 [2024-11-06 12:38:38.511771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.511782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 
00:32:06.952 [2024-11-06 12:38:38.511939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.952 [2024-11-06 12:38:38.511950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.952 qpair failed and we were unable to recover it. 00:32:06.953 [2024-11-06 12:38:38.512202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.953 [2024-11-06 12:38:38.512212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.953 qpair failed and we were unable to recover it. 00:32:06.953 [2024-11-06 12:38:38.512379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.953 [2024-11-06 12:38:38.512391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.953 qpair failed and we were unable to recover it. 00:32:06.953 [2024-11-06 12:38:38.512626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.953 [2024-11-06 12:38:38.512638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.953 qpair failed and we were unable to recover it. 00:32:06.953 [2024-11-06 12:38:38.512805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:06.953 [2024-11-06 12:38:38.512815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:06.953 qpair failed and we were unable to recover it. 
00:32:07.238 [2024-11-06 12:38:38.512980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.512991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.513170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.513181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.513343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.513355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.513615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.513626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.513840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.513851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 
00:32:07.238 [2024-11-06 12:38:38.513928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.513938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 
00:32:07.238 [2024-11-06 12:38:38.514579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.514854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.514865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.515016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.515027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.515294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.515304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 
00:32:07.238 [2024-11-06 12:38:38.515543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.515556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.515654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.238 [2024-11-06 12:38:38.515664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.238 qpair failed and we were unable to recover it. 00:32:07.238 [2024-11-06 12:38:38.515832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.515844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.515988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.515998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.516148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.516160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.516442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.516453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.516736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.516747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.516927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.516938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.517221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.517232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.517392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.517403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.517589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.517601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.517756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.517767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.517979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.517990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.518149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.518160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.518337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.518349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.518526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.518537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.518677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.518687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.518899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.518909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.519071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.519314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.519543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.519712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.519830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.519939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.519949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.520088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.520313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.520412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.520566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.520791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.520965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.520976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.521050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.521061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.521312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.521323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.521585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.521597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.521840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.521851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.522064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.522336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.522513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.522608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.522794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.522967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.522977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.523250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.523260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.523436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.523445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.523530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.523542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.523685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.523696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.523930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.523941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.524195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.524206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.524389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.524401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.524522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.524534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.524628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.524639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.524866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.524877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.525032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.525205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.525381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.525529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.525724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.525975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.525986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.526248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.526259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.526493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.526504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.526650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.526661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.526825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.526834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.526933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.526943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.527168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.527179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.527363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.527376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.527593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.527605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.527693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.527704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 
00:32:07.239 [2024-11-06 12:38:38.527939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.527949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.528155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.528165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.528316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.528325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.528545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.239 [2024-11-06 12:38:38.528557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.239 qpair failed and we were unable to recover it. 00:32:07.239 [2024-11-06 12:38:38.528647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.528657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.528930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.528941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.529126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.529137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.529288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.529299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.529495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.529506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.529663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.529673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.529893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.529903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.530108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.530118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.530372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.530383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.530644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.530655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.530898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.530908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.531168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.531336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.531509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.531597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.531724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.531924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.531935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.532185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.532196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.532433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.532443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.532620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.532631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.532818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.532829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.532938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.532948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.533170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.533181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.533337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.533349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.533523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.533535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.533754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.533765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.534019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.534191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.534347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.534532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.534705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.534881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.534892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.535058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.535068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.535300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.535313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.535473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.535484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.535709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.535720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.535943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.535954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.536050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.536061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.536269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.536280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.536417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.536428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 00:32:07.240 [2024-11-06 12:38:38.536689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.536701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
00:32:07.240 [2024-11-06 12:38:38.536858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.240 [2024-11-06 12:38:38.536868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.240 qpair failed and we were unable to recover it. 
[... the preceding three-line error sequence repeats for every connection retry from 12:38:38.536858 through 12:38:38.554983 (errno = 111 on each connect(), addr=10.0.0.2, port=4420), first against tqpair=0x7f2060000b90 and then against tqpair=0x7f2068000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:07.242 [2024-11-06 12:38:38.555049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.555543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.555959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.555969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.556180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.556274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.556408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.556508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.556730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.556827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.556839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.557041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.557739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.557937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.557948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.558155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.558343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.558516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.558598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.558687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.558910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.558922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.559102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.559113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 
00:32:07.242 [2024-11-06 12:38:38.559221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.559232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.559316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.559327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.559418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.242 [2024-11-06 12:38:38.559429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.242 qpair failed and we were unable to recover it. 00:32:07.242 [2024-11-06 12:38:38.559613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.559769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.559781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.559997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.560156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.560257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.560413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.560586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.560856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.560961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.560972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.561131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.561281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.561386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.561498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.561724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.561954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.562383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.562978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.562989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.563201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.563299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.563390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.563499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.563655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.563766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.563919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.563930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.564499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.564938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.565190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.565282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.565434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.565596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.565771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 00:32:07.243 [2024-11-06 12:38:38.565922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.565932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it. 
00:32:07.243 [2024-11-06 12:38:38.566111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.243 [2024-11-06 12:38:38.566121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.243 qpair failed and we were unable to recover it.
00:32:07.243 [... identical three-line failure (posix_sock_create connect() errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error, tqpair=0x7f2060000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeated through 2024-11-06 12:38:38.582233; repeats elided ...]
00:32:07.245 [2024-11-06 12:38:38.582372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.582382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.582535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.582546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.582633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.582643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.582790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.582800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.583117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.583681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.583980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.583990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.584490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.584937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.584948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.585023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.585705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.585955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.585965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.586034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.586212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.586439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.586558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.586721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.586875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.586886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.587069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.587320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.587499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.587601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.587753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.587858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.587955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.587966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.588153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.588303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.588486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.588654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.588806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.588950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.588961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.589045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.589056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.589292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.589304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.245 [2024-11-06 12:38:38.589383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.589394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 
00:32:07.245 [2024-11-06 12:38:38.589579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.245 [2024-11-06 12:38:38.589591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.245 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.589674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.589685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.589892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.589903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
00:32:07.246 [2024-11-06 12:38:38.590241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.590919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.590930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
00:32:07.246 [2024-11-06 12:38:38.591076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
00:32:07.246 [2024-11-06 12:38:38.591595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.591942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.591953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.592088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
00:32:07.246 [2024-11-06 12:38:38.592188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.592287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.592385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.592468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 00:32:07.246 [2024-11-06 12:38:38.592615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
00:32:07.246 [2024-11-06 12:38:38.592773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.246 [2024-11-06 12:38:38.592782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.246 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure message pair repeated ~115 times between 12:38:38.592773 and 12:38:38.608895; every attempt to addr=10.0.0.2, port=4420 on tqpair=0x7f2060000b90 failed with errno = 111 and the qpair could not be recovered ...]
00:32:07.247 [2024-11-06 12:38:38.608884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.247 [2024-11-06 12:38:38.608895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.247 qpair failed and we were unable to recover it. 
00:32:07.247 [2024-11-06 12:38:38.608975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.247 [2024-11-06 12:38:38.608986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.247 qpair failed and we were unable to recover it. 00:32:07.247 [2024-11-06 12:38:38.609065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.609286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.609380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.609549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.609656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.609746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.609897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.609909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.610360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.610756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.610933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.610944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.611027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.611198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.611292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.611552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.611718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.611966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.611977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.612188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.612355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.612524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.612635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.612737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.612823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.612834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.613440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.613925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.613935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.614019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.614739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.614939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.614949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.615242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.615838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.615849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.615994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.616598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.616956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.616966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.617067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.617282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.617443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.617609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.617774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.617921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.617930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.618048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.618058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.618190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.618199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.618292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.618301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.618377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.618387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 00:32:07.248 [2024-11-06 12:38:38.618454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.248 [2024-11-06 12:38:38.618469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.248 qpair failed and we were unable to recover it. 
00:32:07.248 [2024-11-06 12:38:38.618554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.249 [2024-11-06 12:38:38.618564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.249 qpair failed and we were unable to recover it. 00:32:07.249 [2024-11-06 12:38:38.618716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.249 [2024-11-06 12:38:38.618727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.249 qpair failed and we were unable to recover it. 00:32:07.249 [2024-11-06 12:38:38.618873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.249 [2024-11-06 12:38:38.618882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.249 qpair failed and we were unable to recover it. 00:32:07.249 [2024-11-06 12:38:38.619085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.249 [2024-11-06 12:38:38.619095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.249 qpair failed and we were unable to recover it. 00:32:07.249 [2024-11-06 12:38:38.619166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.249 [2024-11-06 12:38:38.619176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.249 qpair failed and we were unable to recover it. 
00:32:07.249 [2024-11-06 12:38:38.619269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:07.249 [2024-11-06 12:38:38.619310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 
00:32:07.249 qpair failed and we were unable to recover it. 
[identical three-line error (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every subsequent connection attempt between 12:38:38.619 and 12:38:38.634]
00:32:07.250 [2024-11-06 12:38:38.634908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.634917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 
00:32:07.250 [2024-11-06 12:38:38.635419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.635924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.635934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 
00:32:07.250 [2024-11-06 12:38:38.636075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 
00:32:07.250 [2024-11-06 12:38:38.636732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.636927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.636936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.637098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.637346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 
00:32:07.250 [2024-11-06 12:38:38.637564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.637653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.637731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.637890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.637900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.638134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.638143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 
00:32:07.250 [2024-11-06 12:38:38.638230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.638239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.638311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.638321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.250 qpair failed and we were unable to recover it. 00:32:07.250 [2024-11-06 12:38:38.638472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.250 [2024-11-06 12:38:38.638483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.638566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.638575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.638644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.638653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.638733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.638743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.638891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.638901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.638991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.639393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.639852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.639992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.640159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.640312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.640400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.640516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.640598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.640776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.640872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.640882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.641488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.641860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.641871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.642095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.642783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.642862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.642871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.643305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.643968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.643979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.644128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.644239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.644405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.644621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 00:32:07.251 [2024-11-06 12:38:38.644783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it. 
00:32:07.251 [2024-11-06 12:38:38.644965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.251 [2024-11-06 12:38:38.644975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.251 qpair failed and we were unable to recover it.
00:32:07.253 [the three messages above repeat ~114 more times with advancing timestamps, from 12:38:38.645066 through 12:38:38.660216, all with errno = 111 for tqpair=0x7f2060000b90, addr=10.0.0.2, port=4420]
00:32:07.253 [2024-11-06 12:38:38.660444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.660455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.660632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.660642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.660732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.660742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.660839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.660850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.661005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.661281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.661380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.661598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.661698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.661817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.661983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.661994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.662245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.662257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.662468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.662479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.662635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.662647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.662799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.662811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.662953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.662964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.663142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.663153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.663308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.663319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.663550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.663561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.663785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.663796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.663950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.663960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.664794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.664910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.664991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.665145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.665313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.665491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.665592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.665758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.665918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.665929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.666067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.666077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.666338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.666349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.666442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.666453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.666669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.666680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.666891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.666902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.667146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 
00:32:07.253 [2024-11-06 12:38:38.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.253 [2024-11-06 12:38:38.667858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.253 [2024-11-06 12:38:38.667868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.253 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.667931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.667941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.668278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.668872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.668882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.669019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.669029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.669143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.669177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.669371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.669404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.669712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.669746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.669947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.669958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.670179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.670212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.670475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.670509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.670697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.670707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.670793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.670803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.670957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.670968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.671062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.671175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.671338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.671521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.671669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.671814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.671905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.671915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.672051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.672062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.672151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.672162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 00:32:07.254 [2024-11-06 12:38:38.672241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.254 [2024-11-06 12:38:38.672251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.254 qpair failed and we were unable to recover it. 
00:32:07.254 [2024-11-06 12:38:38.672440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.254 [2024-11-06 12:38:38.672450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.254 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 12:38:38.672440 through 12:38:38.688701. The failing qpair is tqpair=0x7f2060000b90 up to 12:38:38.684424 and tqpair=0x7f2068000b90 from 12:38:38.684522 onward ...]
00:32:07.256 [2024-11-06 12:38:38.688864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.688874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.689561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.689853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.689863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.690292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.690885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.690982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.690992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.691646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.691818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.691828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.692287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.692833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.692913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.692924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.693555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.693854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.693995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.694080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.694275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.694549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.694720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.694967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.694978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.695073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.695241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.695483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.695710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.695797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.695962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.695973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.696121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.696230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.696406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.696568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.696809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.696972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.696983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.697749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.697855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.697989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.698157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.698393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.698486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.698669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.698817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.698925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.698935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.699054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.699065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 
00:32:07.256 [2024-11-06 12:38:38.699150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.256 [2024-11-06 12:38:38.699161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.256 qpair failed and we were unable to recover it. 00:32:07.256 [2024-11-06 12:38:38.699317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.699328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.699578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.699589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.699734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.699745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.699935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.699946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 
00:32:07.257 [2024-11-06 12:38:38.700102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.700113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.700276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.700286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.700479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.700507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.700665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.700677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 00:32:07.257 [2024-11-06 12:38:38.700752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.257 [2024-11-06 12:38:38.700763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.257 qpair failed and we were unable to recover it. 
00:32:07.257 [2024-11-06 12:38:38.702017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.257 [2024-11-06 12:38:38.702031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.257 qpair failed and we were unable to recover it.
00:32:07.258 [2024-11-06 12:38:38.716221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.716299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.716446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.716732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.716814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 
00:32:07.258 [2024-11-06 12:38:38.716909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.716919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 
00:32:07.258 [2024-11-06 12:38:38.717652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.717928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.717939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 
00:32:07.258 [2024-11-06 12:38:38.718374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.718897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.718908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 
00:32:07.258 [2024-11-06 12:38:38.719066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.719076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.719170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.719181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.258 qpair failed and we were unable to recover it. 00:32:07.258 [2024-11-06 12:38:38.719401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.258 [2024-11-06 12:38:38.719411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.719508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.719519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.719603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.719614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.719777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.719788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.719966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.719977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.720491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.720929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.720940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.721116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.721707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.721947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.721957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.722351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.722891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.722902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.723110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.723284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.723451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.723534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.723694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.723839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.723849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.724574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.724946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.724956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.725037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.725124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.725340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.725433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.725588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.725819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.725908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.725917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.726094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.726178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.726429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.726581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.726682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.726859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.726870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.727518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.727967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.727976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.728135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.728145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.728357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.728366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.728523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.728533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.728688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.728699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.728855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.728864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.728999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.729150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.729260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.729373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.729605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 00:32:07.259 [2024-11-06 12:38:38.729678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.259 qpair failed and we were unable to recover it. 
00:32:07.259 [2024-11-06 12:38:38.729928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.259 [2024-11-06 12:38:38.729938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.730522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.730945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.730955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.731119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.731303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.731392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.731500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.731594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.731758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.731768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.732024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.732034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.732301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.732311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.732467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.732477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.732625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.732635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.732845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.732855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.733631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.733921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.733931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.734250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.734780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.734885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.734895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.735620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.735934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.735944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.736100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.736113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.736311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.736322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.736471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.736481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.736666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.736676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.736852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.736862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.737314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.737843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.737854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.738071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.738574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.738907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.738917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.739006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.739104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 
00:32:07.260 [2024-11-06 12:38:38.739292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.739576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.739790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.260 [2024-11-06 12:38:38.739881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.260 [2024-11-06 12:38:38.739890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.260 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.740052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.740085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 
00:32:07.261 [2024-11-06 12:38:38.740239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.740270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.740549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.740583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.740693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.740702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.740843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.740853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.741002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.741034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 
00:32:07.261 [2024-11-06 12:38:38.741306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.741336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.741480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.741512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.741686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.741719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.741835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.741866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.742139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.742148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 
00:32:07.261 [2024-11-06 12:38:38.742382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.742392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.743598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.743620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.743888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.743899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.744116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.744156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 00:32:07.261 [2024-11-06 12:38:38.744295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.261 [2024-11-06 12:38:38.744326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.261 qpair failed and we were unable to recover it. 
00:32:07.261 [2024-11-06 12:38:38.744537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.744571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.744732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.744764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.744895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.744925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.745145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.745177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.745326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.745359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.745546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.745578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.745773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.745805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.745939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.745969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.746111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.746143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.746330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.746362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.746506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.746538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.746758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.746790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.746944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.746975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.747194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.747225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.747404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.747435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.747724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.747757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.747890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.747922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.748894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.748903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.749915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.749999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.750152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.750311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.750570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.750812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.750973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.750982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.751866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.751898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.752007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.752017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.752240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.752273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.752402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.752434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.752708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.752740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.752942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.752952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.753141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.753173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.753361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.753395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.753510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.753543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.753800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.753833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.754056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.754088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.754299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.754331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.754590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.754624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.754844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.754882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.754969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.754978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.755063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.755072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.755233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.261 [2024-11-06 12:38:38.755263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.261 qpair failed and we were unable to recover it.
00:32:07.261 [2024-11-06 12:38:38.755378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.755410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.755548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.755580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.755763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.755795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.755970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.755979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.756046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.756055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.756135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.756145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.756190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95530 (9): Bad file descriptor
00:32:07.262 [2024-11-06 12:38:38.756414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.756492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.756770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.756840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.757035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.757107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.757247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.757286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.757487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.757525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.757643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.757675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.757795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.757828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.758934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.758944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.759970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.759981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.760940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.760972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.761154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.761194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.761341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.761352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.761557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.761568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.761631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.262 [2024-11-06 12:38:38.761641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.262 qpair failed and we were unable to recover it.
00:32:07.262 [2024-11-06 12:38:38.761721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.761731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.761796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.761805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.761958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.761968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.762339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.762851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.762947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.762957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.763539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.763847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.763857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.764110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.764624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.764891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.764901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.765050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.765061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.765153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.765162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 
00:32:07.262 [2024-11-06 12:38:38.765376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.765386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.262 qpair failed and we were unable to recover it. 00:32:07.262 [2024-11-06 12:38:38.765456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.262 [2024-11-06 12:38:38.765469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.765543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.765552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.765704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.765713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.765776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.765786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.765868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.765878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.766050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.766299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.766534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.766615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.766816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.766921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.766930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.767466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.767854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.767999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.768081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.768315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.768468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.768633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.768868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.768966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.768975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.769113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.769205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.769308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.769460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 
00:32:07.263 [2024-11-06 12:38:38.769555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.263 qpair failed and we were unable to recover it. 00:32:07.263 [2024-11-06 12:38:38.769724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.263 [2024-11-06 12:38:38.769734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.769821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.769831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.769967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.769977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 
00:32:07.264 [2024-11-06 12:38:38.770148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 
00:32:07.264 [2024-11-06 12:38:38.770865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.770959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.770969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 
00:32:07.264 [2024-11-06 12:38:38.771398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.771925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.771934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 
00:32:07.264 [2024-11-06 12:38:38.772006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.772016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.772220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.772230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.772300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.772310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.772460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.772471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 00:32:07.264 [2024-11-06 12:38:38.772649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.264 [2024-11-06 12:38:38.772659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.264 qpair failed and we were unable to recover it. 
00:32:07.264 [2024-11-06 12:38:38.772875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:07.264 [2024-11-06 12:38:38.772884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 
00:32:07.264 qpair failed and we were unable to recover it. 
[above three-line error repeated through 00:32:07.268 / 12:38:38.789875 — every connect() retry for tqpair=0x7f2068000b90 (addr=10.0.0.2, port=4420) failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:32:07.268 [2024-11-06 12:38:38.789969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.789979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.790710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.790964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.790973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.791127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.791295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.791379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.791545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.791761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.791909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.791919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.792025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.792167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.792313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.792400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.792546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.792638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.792801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.792811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.793017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.793164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.793358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.793590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.793837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.793944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.793955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.794181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.794192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.794346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.794356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 00:32:07.268 [2024-11-06 12:38:38.794504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.794515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.268 qpair failed and we were unable to recover it. 
00:32:07.268 [2024-11-06 12:38:38.794656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.268 [2024-11-06 12:38:38.794666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.794746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.794755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.794848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.794857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.794942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.794952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.795100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.795274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.795431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.795506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.795671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.795827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.795836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.796115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.796630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.796878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.796888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.797324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.797910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.797920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.798078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.798295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.798454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.798603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.798679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.798963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.798973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 
00:32:07.269 [2024-11-06 12:38:38.799637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.269 [2024-11-06 12:38:38.799737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.269 qpair failed and we were unable to recover it. 00:32:07.269 [2024-11-06 12:38:38.799942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.799952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.800094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.800104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.800290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.800299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 
00:32:07.270 [2024-11-06 12:38:38.800528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.800538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.800616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.800626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.800884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.800894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.801046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.801157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 
00:32:07.270 [2024-11-06 12:38:38.801363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.801552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.801716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.801932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.801942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.802018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 
00:32:07.270 [2024-11-06 12:38:38.802193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.802346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.802445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.802624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 00:32:07.270 [2024-11-06 12:38:38.802790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.270 [2024-11-06 12:38:38.802800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.270 qpair failed and we were unable to recover it. 
00:32:07.270 [2024-11-06 12:38:38.802948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.802958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.803955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.803965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.804931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.804941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.805859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.270 [2024-11-06 12:38:38.805869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.270 qpair failed and we were unable to recover it.
00:32:07.270 [2024-11-06 12:38:38.806015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.806211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.806369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.806517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.806747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.806930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.806939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.807802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.807816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.808934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.808944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.809043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.809054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.809318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.809329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.809492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.809502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.809659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.809669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.809877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.809886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.810836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.810846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.811960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.811970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.271 qpair failed and we were unable to recover it.
00:32:07.271 [2024-11-06 12:38:38.812043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.271 [2024-11-06 12:38:38.812055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.812214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.812224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.812362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.812372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.812467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.812477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.812667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.812677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.812857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.812867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.813976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.813985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.814850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.814860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.815902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.815913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.816956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.816966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.272 qpair failed and we were unable to recover it.
00:32:07.272 [2024-11-06 12:38:38.817247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.272 [2024-11-06 12:38:38.817258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.817392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.817403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.817475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.817486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.817701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.817712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.817812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.817823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.817973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.817984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.818957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.818969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.819045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.819056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.819315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.819327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.819539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.819550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.819626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.819637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.819870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.819881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.820896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.820991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.821940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.821951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.822962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.822972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.273 [2024-11-06 12:38:38.823120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.273 [2024-11-06 12:38:38.823130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.273 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.823210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.823220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.823379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.823389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.823577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.823588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.823736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.823745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.823954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.823965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.824765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.824777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.825819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.825830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.826011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.826022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.826187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.826198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.826428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.826440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.826620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.826634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.826828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.826840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.827069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.827081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.827145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.827157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.827307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.827319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.827573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.827584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.827815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.827841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.828001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.828168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.828180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.274 [2024-11-06 12:38:38.828318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.274 [2024-11-06 12:38:38.828328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.274 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.828426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.828438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.828602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.828614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.828765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.828777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.829982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.829993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.830881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.830891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.831963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.831973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.832135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.832146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.832318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.832329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.832493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.832504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.832679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.563 [2024-11-06 12:38:38.832690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.563 qpair failed and we were unable to recover it.
00:32:07.563 [2024-11-06 12:38:38.832754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.564 [2024-11-06 12:38:38.832765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.564 qpair failed and we were unable to recover it.
00:32:07.564 [2024-11-06 12:38:38.832895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.564 [2024-11-06 12:38:38.832905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.564 qpair failed and we were unable to recover it.
00:32:07.564 [2024-11-06 12:38:38.833052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.833208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.833286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.833381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.833547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.833709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.833900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.833913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.834471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.834837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.834993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.835158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.835343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.835523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.835677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.835892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.835901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.836133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.836224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.836296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.836452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.836560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.836664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.836814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.836824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.837102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.837112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.837246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.837256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.837497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.837510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.837668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.837680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.837759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.837769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.837995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.838006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.838155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.838166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.838236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.838247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.838337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.838348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 
00:32:07.564 [2024-11-06 12:38:38.838447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.564 [2024-11-06 12:38:38.838457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.564 qpair failed and we were unable to recover it. 00:32:07.564 [2024-11-06 12:38:38.838555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.838565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.838715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.838726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.838815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.838826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.839032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.839112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.839206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.839374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.839539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.839800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.839833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.840036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.840046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.840208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.840240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.840439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.840480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.840659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.840692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.840810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.840820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.841093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.841126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.841319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.841352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.841601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.841634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.841838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.841870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.842063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.842096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.842343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.842374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.842652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.842684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.842940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.842972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.843239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.843272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.843484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.843517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.843805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.843838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.844087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.844097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.844259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.844269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.844491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.844523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.844707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.844740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.845059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.845092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.845358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.845367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.845524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.845534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.845775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.845789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.845954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.845964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 
00:32:07.565 [2024-11-06 12:38:38.846204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.846236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.846419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.846451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.846610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.846643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.846843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.846876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.565 qpair failed and we were unable to recover it. 00:32:07.565 [2024-11-06 12:38:38.846990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.565 [2024-11-06 12:38:38.847021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.847206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.847238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.847433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.847474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.847668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.847699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.847993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.848229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.848373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.848471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.848620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.848709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.848897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.848906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.849061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.849743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.849985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.849995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.850300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.850824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.850833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.851039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.851185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.851271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.851484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.851595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.851809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.851840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.852036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.852068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.852314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.852323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.852490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.852528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.852828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.852861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 
00:32:07.566 [2024-11-06 12:38:38.853068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.853078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.853244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.853253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.566 qpair failed and we were unable to recover it. 00:32:07.566 [2024-11-06 12:38:38.853397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.566 [2024-11-06 12:38:38.853406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.853612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.853622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.853777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.853787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.853936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.853945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.854021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.854107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.854208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.854297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.854532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.854747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.854779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.855005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.855333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.855439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.855536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.855712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.855852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.855862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.856024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.856057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.856255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.856287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.856482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.856516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.856629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.856661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.856857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.856890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.857141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.857172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.857447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.857457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.857541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.857559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.857644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.857653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.857871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.857881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.858014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.858025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.858230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.858264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.858517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.858551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.858691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.858722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.858929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.858961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.859196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.859229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 00:32:07.567 [2024-11-06 12:38:38.859424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.859455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.567 qpair failed and we were unable to recover it. 
00:32:07.567 [2024-11-06 12:38:38.859676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.567 [2024-11-06 12:38:38.859709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.859912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.859944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.860233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.860265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.860517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.860528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.860686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.860695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.860928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.860959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.861166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.861199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.861393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.861425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.861630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.861662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.861874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.861904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.862081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.862090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.862242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.862252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.862514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.862548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.862748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.862780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.862925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.862955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.863158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.863323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.863508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.863714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.863859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.863945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.863954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.864147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.864178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.864318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.864349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.864473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.864505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.864702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.864734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.864936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.864968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.865756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.865858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.865992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.866160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.866267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 
00:32:07.568 [2024-11-06 12:38:38.866423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.866660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.866739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.568 [2024-11-06 12:38:38.866825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.568 [2024-11-06 12:38:38.866834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.568 qpair failed and we were unable to recover it. 00:32:07.569 [2024-11-06 12:38:38.867020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.569 [2024-11-06 12:38:38.867052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.569 qpair failed and we were unable to recover it. 
00:32:07.570 [2024-11-06 12:38:38.881827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.570 [2024-11-06 12:38:38.881899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.570 qpair failed and we were unable to recover it.
00:32:07.570 [2024-11-06 12:38:38.882133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.570 [2024-11-06 12:38:38.882160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.570 qpair failed and we were unable to recover it.
00:32:07.570 [2024-11-06 12:38:38.882383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.570 [2024-11-06 12:38:38.882395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.570 qpair failed and we were unable to recover it.
00:32:07.570 [2024-11-06 12:38:38.882614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.570 [2024-11-06 12:38:38.882650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.570 qpair failed and we were unable to recover it.
00:32:07.570 [2024-11-06 12:38:38.882936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.570 [2024-11-06 12:38:38.882967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.571 qpair failed and we were unable to recover it.
00:32:07.571 [2024-11-06 12:38:38.890012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:07.571 [2024-11-06 12:38:38.890055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 
00:32:07.571 qpair failed and we were unable to recover it. 
00:32:07.573 [2024-11-06 12:38:38.899636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:07.573 [2024-11-06 12:38:38.899707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 
00:32:07.573 qpair failed and we were unable to recover it. 
00:32:07.573 [2024-11-06 12:38:38.899885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:07.573 [2024-11-06 12:38:38.899897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 
00:32:07.573 qpair failed and we were unable to recover it. 
00:32:07.574 [2024-11-06 12:38:38.911551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.911561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.911647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.911657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.911781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.911790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.912021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.912031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.912263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.912296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 
00:32:07.574 [2024-11-06 12:38:38.912503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.912538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.912732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.912763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.912982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.913014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.913210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.913219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.913478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.913489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 
00:32:07.574 [2024-11-06 12:38:38.913748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.913758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.913907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.913916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.914001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.914010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.914163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.914172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.914270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.914309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 
00:32:07.574 [2024-11-06 12:38:38.914493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.914527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.914782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.914813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.574 [2024-11-06 12:38:38.915064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.574 [2024-11-06 12:38:38.915073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.574 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.915170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.915179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.915383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.915415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.915674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.915708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.915845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.915877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.916093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.916126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.916392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.916424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.916690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.916723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.916925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.916957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.917152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.917184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.917376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.917407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.917645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.917679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.917987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.918019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.918211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.918243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.918388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.918419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.918622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.918632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.918726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.918758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.919063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.919094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.919288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.919321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.919432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.919489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.919695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.919728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.919910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.919949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.920137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.920169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.920442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.920452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.920619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.920628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.920869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.920901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.921156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.921188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.921476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.921509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.921774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.921784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.921881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.921891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.922163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.922441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.922505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.922777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.922810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.923031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.923062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.923308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.923317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.923577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.923588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.923738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.923748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.923828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.923837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 
00:32:07.575 [2024-11-06 12:38:38.923991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.575 [2024-11-06 12:38:38.924001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.575 qpair failed and we were unable to recover it. 00:32:07.575 [2024-11-06 12:38:38.924088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.924098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.924233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.924243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.924394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.924425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.924618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.924652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 
00:32:07.576 [2024-11-06 12:38:38.924919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.924952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.925195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.925204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.925477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.925487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.925642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.925651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.925878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.925887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 
00:32:07.576 [2024-11-06 12:38:38.926042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.926052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.926272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.926304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.926585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.926619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.926880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.926889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.927034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 
00:32:07.576 [2024-11-06 12:38:38.927135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.927286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.927387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.927567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.927858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.927889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 
00:32:07.576 [2024-11-06 12:38:38.928019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.928052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.928184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.928217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.928404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.928435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.928621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.928631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 00:32:07.576 [2024-11-06 12:38:38.928784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.576 [2024-11-06 12:38:38.928794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.576 qpair failed and we were unable to recover it. 
00:32:07.579 [2024-11-06 12:38:38.952640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.952650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.952793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.952803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.952875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.952884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 
00:32:07.579 [2024-11-06 12:38:38.953415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.953930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.953940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 
00:32:07.579 [2024-11-06 12:38:38.954089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.954098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.954172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.954181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.954249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.954258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.954352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.954361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 00:32:07.579 [2024-11-06 12:38:38.954641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.579 [2024-11-06 12:38:38.954674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.579 qpair failed and we were unable to recover it. 
00:32:07.579 [2024-11-06 12:38:38.954800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.954833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.955014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.955045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.955238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.955248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.955331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.955340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.955510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.955544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.955745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.955775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.956078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.956112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.956363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.956372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.956523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.956533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.956686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.956695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.956951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.956984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.957185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.957217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.957408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.957418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.957555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.957565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.957633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.957643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.957904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.957936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.958790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.958978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.958988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.959056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.959065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.959169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.959180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.959417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.959449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.959738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.959770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.959969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.960001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.960179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.960211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.960450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.960491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.960746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.960778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.960989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.961022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.961324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.961358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.961544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.961577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.961701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.961734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 
00:32:07.580 [2024-11-06 12:38:38.961950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.961982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.962098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.962129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.962412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.962443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.580 qpair failed and we were unable to recover it. 00:32:07.580 [2024-11-06 12:38:38.962705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.580 [2024-11-06 12:38:38.962739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.963005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.963038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 
00:32:07.581 [2024-11-06 12:38:38.963316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.963348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.963561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.963571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.963719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.963728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.963974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.963984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.964120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.964152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 
00:32:07.581 [2024-11-06 12:38:38.964355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.964387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.964573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.964607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.964842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.964852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.965034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.965066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.965270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.965301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 
00:32:07.581 [2024-11-06 12:38:38.965438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.965483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.965654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.965663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.965830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.965863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.966116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.966147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.966279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.966311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 
00:32:07.581 [2024-11-06 12:38:38.966444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.966454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.966712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.966743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.966996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.967029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.967179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.967211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 00:32:07.581 [2024-11-06 12:38:38.967339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.967372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it. 
00:32:07.581 [2024-11-06 12:38:38.967509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.581 [2024-11-06 12:38:38.967549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.581 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock error for tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:38:38.967 through 12:38:38.989; repeated occurrences elided ...]
00:32:07.584 [2024-11-06 12:38:38.989380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.989411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.989615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.989626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.989726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.989736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.989897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.989930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.990126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.990158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 
00:32:07.584 [2024-11-06 12:38:38.990288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.990327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.990469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.990503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.990724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.990733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.991037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.991070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.584 [2024-11-06 12:38:38.991324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.991355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 
00:32:07.584 [2024-11-06 12:38:38.991551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.584 [2024-11-06 12:38:38.991585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.584 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.991797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.991807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.991943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.991952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.992036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.992046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.992262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.992293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.992423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.992455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.992667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.992699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.992925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.992958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.993141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.993171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.993379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.993410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.993700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.993732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.993916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.993948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.994164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.994196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.994395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.994427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.994577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.994610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.994731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.994761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.994979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.995011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.995212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.995243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.995446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.995455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.995667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.995699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.995915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.995948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.996063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.996228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.996446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.996693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.996801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.996943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.996952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.997163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.997196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.997396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.997428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.997693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.997727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 
00:32:07.585 [2024-11-06 12:38:38.997999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.998031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.998216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.998247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.998550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.998560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.998719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.998728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.585 qpair failed and we were unable to recover it. 00:32:07.585 [2024-11-06 12:38:38.998824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.585 [2024-11-06 12:38:38.998834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:38.998899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.998910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:38.999535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:38.999793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:38.999803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.000043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.000261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:39.000490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.000649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.000739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.000849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.000883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.001090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.001121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:39.001374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.001383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.001605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.001639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.001825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.001857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.002040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.002071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.002345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.002355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:39.002562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.002572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.002735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.002745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.002993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.003025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.003325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.003357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.003558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.003569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:39.003742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.003751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.003919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.003928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.004017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.004026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.004269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.004301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 00:32:07.586 [2024-11-06 12:38:39.004581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.586 [2024-11-06 12:38:39.004614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.586 qpair failed and we were unable to recover it. 
00:32:07.586 [2024-11-06 12:38:39.004818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.586 [2024-11-06 12:38:39.004850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.586 qpair failed and we were unable to recover it.
00:32:07.589 [2024-11-06 12:38:39.027424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.027456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 00:32:07.589 [2024-11-06 12:38:39.027661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.027693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 00:32:07.589 [2024-11-06 12:38:39.027922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.027955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 00:32:07.589 [2024-11-06 12:38:39.028150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.028183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 00:32:07.589 [2024-11-06 12:38:39.028430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.028440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 
00:32:07.589 [2024-11-06 12:38:39.028529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.589 [2024-11-06 12:38:39.028539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.589 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.028736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.028746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.028838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.028848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.028997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.029157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.029232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.029395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.029629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.029840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.029925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.029939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.030015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.030048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.030266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.030298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.030446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.030488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.030687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.030719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.030904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.030935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.031086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.031117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.031388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.031398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.031473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.031483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.031642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.031675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.031814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.031845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.032025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.032262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.032477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.032640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.032803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.032901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.032910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 
00:32:07.590 [2024-11-06 12:38:39.033604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.033931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.033940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.590 qpair failed and we were unable to recover it. 00:32:07.590 [2024-11-06 12:38:39.034093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.590 [2024-11-06 12:38:39.034102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.034210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.034424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.034503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.034661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.034856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.034964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.034996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.035216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.035247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.035432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.035476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.035680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.035711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.035857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.035889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.036138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.036170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.036390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.036423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.036574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.036607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.036759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.036783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.036925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.036935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.037098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.037131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.037418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.037451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.037678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.037711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.037968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.038270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.038464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.038574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.038740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.038840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.038873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.039008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.039182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.039411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.039710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.039807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.039905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.039937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.040143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.040176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.040305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.040336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.040481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.040515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.040698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.040729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.040860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 
00:32:07.591 [2024-11-06 12:38:39.041029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.041060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.041348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.041380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.591 [2024-11-06 12:38:39.041591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.591 [2024-11-06 12:38:39.041601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.591 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.041831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.041841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.041912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.041921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.042076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.042086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.042222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.042231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.042478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.042518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.042651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.042683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.042905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.042937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.043194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.043505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.043539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.043839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.043870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.044053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.044084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.044360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.044391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.044538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.044548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.044629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.044639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.044844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.044853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.045022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.045032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.045175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.045184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.045335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.045369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.045518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.045550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.045782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.045815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.046090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.046244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.046323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.046471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.046637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.046821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.046852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.047041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.047074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.047197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.047229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.047438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.047503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.047680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.047689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.047772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.047782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.048076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.048108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 
00:32:07.592 [2024-11-06 12:38:39.048328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.048361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.048523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.048534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.048602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.048611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.592 [2024-11-06 12:38:39.048813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.592 [2024-11-06 12:38:39.048823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.592 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.048955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.048964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.049152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.049184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.049385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.049417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.049700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.049734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.049861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.049892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.050198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.050230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.050376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.050409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.050687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.050697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.050832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.050864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.051148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.051178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.051496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.051529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.051756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.051766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.051919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.051929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.052089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.052123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.052350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.052382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.052517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.052551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.052742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.052751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.052926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.052959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.053159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.053337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.053524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.053693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.053776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.054116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.054148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.054428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.054467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.054751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.054761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.054917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.054927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.055084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.055117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.055430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.593 [2024-11-06 12:38:39.055641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.055682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 
00:32:07.593 [2024-11-06 12:38:39.055780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.593 [2024-11-06 12:38:39.055790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.593 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [2024-11-06 12:38:39.056535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.056924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.056956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.057068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.057100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.057247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.057279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [2024-11-06 12:38:39.057402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.057434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.057632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.057664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.057846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.057879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.058099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.058131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.058427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.058466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [2024-11-06 12:38:39.058747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.058778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.058953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.058963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.059206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.059245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.059444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.059483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.059681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.059713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [2024-11-06 12:38:39.060023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.060123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.060287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.060386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.060548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [2024-11-06 12:38:39.060859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.060891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.061120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.061152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.061297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.061329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.061530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.061562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 00:32:07.594 [2024-11-06 12:38:39.061825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.594 [2024-11-06 12:38:39.061835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.594 qpair failed and we were unable to recover it. 
00:32:07.594 [... repeated identical retries elided, [2024-11-06 12:38:39.061930] through [2024-11-06 12:38:39.074401]: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:07.596 [2024-11-06 12:38:39.074547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.074579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.074720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.074753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.074895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.074926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.075086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.075118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.075310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.075382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 
00:32:07.596 [2024-11-06 12:38:39.075699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.075769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.076029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.076100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.076370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.076409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.076623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.076656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 00:32:07.596 [2024-11-06 12:38:39.076798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.596 [2024-11-06 12:38:39.076831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.596 qpair failed and we were unable to recover it. 
00:32:07.597 [... repeated identical retries elided, [2024-11-06 12:38:39.076969] through [2024-11-06 12:38:39.084938]: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:07.598 [2024-11-06 12:38:39.085072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.085297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.085476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.085603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.085696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 
00:32:07.598 [2024-11-06 12:38:39.085805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.085839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.086057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.086309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.086475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.086550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 
00:32:07.598 [2024-11-06 12:38:39.086697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.086866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.086899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.087047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.087078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.087277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.087520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.087554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 
00:32:07.598 [2024-11-06 12:38:39.087684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.087694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.087853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.087863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.088036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.088126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.088218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 
00:32:07.598 [2024-11-06 12:38:39.088370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.088632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.088887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.088919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.089161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.089194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.089472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.089519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 
00:32:07.598 [2024-11-06 12:38:39.089800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.089814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.090052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.090062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.090267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.090278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.598 [2024-11-06 12:38:39.090425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.598 [2024-11-06 12:38:39.090435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.598 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.090613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.090648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.090793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.090825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.091044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.091077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.091362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.091394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.091526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.091552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.091712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.091722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.091975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.092006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.092210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.092242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.092389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.092431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.092612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.092656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.092941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.092975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.093301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.093334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.093549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.093584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.093804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.093817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.094086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.094123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.094324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.094360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.094681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.094691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.094921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.094931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.095081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.095170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.095283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.095453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.095624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.095920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.095929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.096088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.096125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.096322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.096355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.096486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.096520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.096705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.096737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.096940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.096971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.097227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.097259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.097599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.097642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.097799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.097809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.098068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.098099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.098293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.098325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.098611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.098644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.599 [2024-11-06 12:38:39.098835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.098845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 
00:32:07.599 [2024-11-06 12:38:39.099082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.599 [2024-11-06 12:38:39.099113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.599 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.099257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.099290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.099506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.099540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.099821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.099852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.100106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.100138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 
00:32:07.600 [2024-11-06 12:38:39.100333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.100364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.100554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.100563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.100792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.100824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.101080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.101113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 00:32:07.600 [2024-11-06 12:38:39.101296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.600 [2024-11-06 12:38:39.101327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.600 qpair failed and we were unable to recover it. 
00:32:07.600 [2024-11-06 12:38:39.101474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.600 [2024-11-06 12:38:39.101484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.600 qpair failed and we were unable to recover it.
[... the preceding three-line pattern repeats ~115 times between 12:38:39.101474 and 12:38:39.123337, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420; tqpair varies among 0x7f205c000b90, 0x7f2060000b90, and 0x7f2068000b90 ...]
00:32:07.603 [2024-11-06 12:38:39.123305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.603 [2024-11-06 12:38:39.123337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.603 qpair failed and we were unable to recover it.
00:32:07.603 [2024-11-06 12:38:39.123554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.123589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.123721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.123755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.123951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.123983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.124193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.124226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.124505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.124537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 
00:32:07.603 [2024-11-06 12:38:39.124732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.124765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.124882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.124901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.125131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.125140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.125348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.125358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.125567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.125578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 
00:32:07.603 [2024-11-06 12:38:39.125665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.125675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.125908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.125918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.126057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.126089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.126223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.126254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.126478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.126511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 
00:32:07.603 [2024-11-06 12:38:39.126707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.126739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.126961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.126993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.127182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.603 [2024-11-06 12:38:39.127213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.603 qpair failed and we were unable to recover it. 00:32:07.603 [2024-11-06 12:38:39.127417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.127449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.127662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.127672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.127810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.127851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.128139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.128171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.128367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.128399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.128602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.128637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.128889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.128920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.129191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.129200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.129343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.129374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.129600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.129634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.129838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.129871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.130087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.130096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.130284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.130316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.130524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.130557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.130740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.130773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.130896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.130905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.131037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.131049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.131210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.131240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.131496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.131529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.131768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.131801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.131985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.132017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.132143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.132176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.132485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.132518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.132772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.132804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.133055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.133086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.133223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.133254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.133441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.133484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.133702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.133733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.133977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.133987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.134080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.134090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.134184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.134193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.134365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.134396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.134658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.134691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.134821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.134853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.135028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.135037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.135187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.135219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.135332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.135364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 
00:32:07.604 [2024-11-06 12:38:39.135548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.135581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.135800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.135809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.604 [2024-11-06 12:38:39.136046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.604 [2024-11-06 12:38:39.136055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.604 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.136145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.136154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.136291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.136300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 
00:32:07.605 [2024-11-06 12:38:39.136504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.136514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.136700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.136733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.136856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.136889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.137071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.137104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.137289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.137320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 
00:32:07.605 [2024-11-06 12:38:39.137537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.137569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.137757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.137766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.137913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.137922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.138097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.138106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.138263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.138295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 
00:32:07.605 [2024-11-06 12:38:39.138420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.138453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.138648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.138680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.138944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.138977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.139228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.139261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 00:32:07.605 [2024-11-06 12:38:39.139481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.139521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it. 
00:32:07.605 [2024-11-06 12:38:39.139647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.605 [2024-11-06 12:38:39.139657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.605 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it." message pairs for tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 repeat continuously through 2024-11-06 12:38:39.162143 (log timestamps 00:32:07.605-00:32:07.891); repeated entries elided]
00:32:07.891 [2024-11-06 12:38:39.162343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.162352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.891 [2024-11-06 12:38:39.162504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.162514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.891 [2024-11-06 12:38:39.162789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.162820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.891 [2024-11-06 12:38:39.163072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.163082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.891 [2024-11-06 12:38:39.163193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.163204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 
00:32:07.891 [2024-11-06 12:38:39.163384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.163416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.891 [2024-11-06 12:38:39.163635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.891 [2024-11-06 12:38:39.163668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.891 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.163949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.163981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.164212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.164221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.164356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.164399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.164595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.164627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.164846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.164855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.164941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.164951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.165287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.165817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.165995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.166005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.166159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.166190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.166420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.166451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.166693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.166725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.166972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.166981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.167066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.167076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.167215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.167223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.167407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.167439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.167732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.167764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.167998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.168029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.168281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.168312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.168555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.168588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.168790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.168800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.168893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.169035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.169067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.169182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.169213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.169480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.169513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.169661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.169692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.169905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.169915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.170068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.170099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.170290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.170322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.892 [2024-11-06 12:38:39.170577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.170610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.170804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.170836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.171090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.171131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.171221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.171230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 00:32:07.892 [2024-11-06 12:38:39.171328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.892 [2024-11-06 12:38:39.171340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.892 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.171405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.171414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.171660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.171692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.171915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.171947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.172061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.172093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.172336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.172345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.172594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.172627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.172809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.172842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.173063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.173095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.173305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.173338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.173521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.173554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.173739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.173771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.173918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.173950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.174095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.174128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.174344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.174375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.174654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.174685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.174817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.174849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.174970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.175435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.175924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.175934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.176148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.176180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.176365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.176396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.176686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.176719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.176902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.176912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.177084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.177117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.177310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.177340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.177481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.177515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.177696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.177727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.177980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.178154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.178371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 
00:32:07.893 [2024-11-06 12:38:39.178482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.178713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.893 [2024-11-06 12:38:39.178861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.893 [2024-11-06 12:38:39.178870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.893 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.179074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.179222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.179420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.179514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.179739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.179843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.179874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.180000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.180032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.180231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.180263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.180496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.180529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.180765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.180774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.181015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.181046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.181337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.181368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.181629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.181663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.181805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.181815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.181899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.181909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.182074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.182105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.182358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.182390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.182542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.182577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.182699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.182731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.182910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.182941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.183132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.183226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.183371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.183482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.183697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.183871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.183903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.184106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.184137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.184354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.184387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.184671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.184740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d87550 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.185016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.185044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.185211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.185237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.894 [2024-11-06 12:38:39.185337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.185381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 
00:32:07.894 [2024-11-06 12:38:39.185656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.894 [2024-11-06 12:38:39.185690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.894 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.185962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.185994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.186119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.186128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.186272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.186282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.186471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.186482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.186687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.186696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.186850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.186882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.187025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.187058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.187195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.187225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.187521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.187559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.187694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.187725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.187904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.187935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.188121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.188214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.188317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.188503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.188664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.188832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.188863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.189052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.189269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.189300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.189554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.189587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.189773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.189782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.189932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.189965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.190103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.190135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.190343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.190376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.190560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.190594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.190827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.190858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.190976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.191007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.191128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.191137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.191366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.191399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.191620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.191652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.191779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.191812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.192001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.192220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.192455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 
00:32:07.895 [2024-11-06 12:38:39.192633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.192825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.895 [2024-11-06 12:38:39.192928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.895 [2024-11-06 12:38:39.192938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.895 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.193083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.193114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.193297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.193329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 
00:32:07.896 [2024-11-06 12:38:39.193483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.193515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.193794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.193825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.194029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.194062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.194254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.194286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.194410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.194442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 
00:32:07.896 [2024-11-06 12:38:39.194584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.194615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.194925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.194957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.195088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.195166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.195344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 
00:32:07.896 [2024-11-06 12:38:39.195424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.195607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.195815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.195845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.196032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.196064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 00:32:07.896 [2024-11-06 12:38:39.196242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.896 [2024-11-06 12:38:39.196251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.896 qpair failed and we were unable to recover it. 
00:32:07.896 [2024-11-06 12:38:39.196456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.896 [2024-11-06 12:38:39.196473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.896 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 12:38:39.196 through 12:38:39.220 ...]
00:32:07.899 [2024-11-06 12:38:39.219995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.899 [2024-11-06 12:38:39.220026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.899 qpair failed and we were unable to recover it.
00:32:07.899 [2024-11-06 12:38:39.220226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.220259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.220563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.220596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.220799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.220832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.221028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.221060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.221170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.221191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 
00:32:07.899 [2024-11-06 12:38:39.221324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.221333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.221570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.221601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.221800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.221833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.222015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.222047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.222179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 
00:32:07.899 [2024-11-06 12:38:39.222410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.222442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.222686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.222718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.222832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.222865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.223084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.223116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.223317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.223327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 
00:32:07.899 [2024-11-06 12:38:39.223512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.223522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.223674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.899 [2024-11-06 12:38:39.223706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.899 qpair failed and we were unable to recover it. 00:32:07.899 [2024-11-06 12:38:39.223908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.223941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.224154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.224185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.224381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.224391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.224536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.224547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.224706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.224715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.224932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.224964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.225138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.225169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.225380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.225412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.225616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.225647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.225900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.225938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.226198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.226358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.226437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.226626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.226788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.226947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.227186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.227218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.227346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.227377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.227498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.227531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.227673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.227706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.227839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.227869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.228003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.228036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.228258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.228267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.228500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.228534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.228788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.228820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.229068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.229078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.229209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.229432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.229470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.229614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.229646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.229829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.229861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.230045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.230078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.230341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.230373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.230489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.230522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.900 [2024-11-06 12:38:39.230702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.230735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.231011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.231042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.231220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.231230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.231402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.231411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 00:32:07.900 [2024-11-06 12:38:39.231645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.900 [2024-11-06 12:38:39.231654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.900 qpair failed and we were unable to recover it. 
00:32:07.901 [2024-11-06 12:38:39.231798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.231807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.231991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.232023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.232245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.232277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.232477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.232510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.232783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.232815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 
00:32:07.901 [2024-11-06 12:38:39.232946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.232979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.233195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.233226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.233434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.233475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.233705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.233736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.233920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.233930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 
00:32:07.901 [2024-11-06 12:38:39.234081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.234090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.234183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.234200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.234405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.234415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.234593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.234603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.234813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.234846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 
00:32:07.901 [2024-11-06 12:38:39.234976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.235006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.235260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.235292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.235478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.235511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.235793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.235823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.236019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.236050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 
00:32:07.901 [2024-11-06 12:38:39.236302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.236335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.236627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.236661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.236860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.236892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.237086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.237119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 00:32:07.901 [2024-11-06 12:38:39.237340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.901 [2024-11-06 12:38:39.237371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.901 qpair failed and we were unable to recover it. 
00:32:07.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 358414 Killed "${NVMF_APP[@]}" "$@" 00:32:07.904 [2024-11-06 12:38:39.255940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.255974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 [2024-11-06 12:38:39.256109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 [2024-11-06 12:38:39.256268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 [2024-11-06 12:38:39.256371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 [2024-11-06 12:38:39.256590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 
00:32:07.904 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:32:07.904 [2024-11-06 12:38:39.256740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 [2024-11-06 12:38:39.256925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.256935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:07.904 [2024-11-06 12:38:39.257082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.257091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 00:32:07.904 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.904 [2024-11-06 12:38:39.257301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.904 [2024-11-06 12:38:39.257313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.904 qpair failed and we were unable to recover it. 
00:32:07.904 [2024-11-06 12:38:39.257398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.257408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.257541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.257552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:07.904 [2024-11-06 12:38:39.257694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.257704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.257795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.257804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:07.904 [2024-11-06 12:38:39.258018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.258898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.258907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.259960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.259970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.260053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.260062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.260197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.260206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.260289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.260299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.260392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.904 [2024-11-06 12:38:39.260402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.904 qpair failed and we were unable to recover it.
00:32:07.904 [2024-11-06 12:38:39.260471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.260481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.260565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.260576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.260729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.260738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.260876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.260885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.260968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.260978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.261960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.261969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.262852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.262862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.263843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.263852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=359231
00:32:07.905 [2024-11-06 12:38:39.264723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 359231
00:32:07.905 [2024-11-06 12:38:39.264880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 [2024-11-06 12:38:39.264965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.264978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:07.905 [2024-11-06 12:38:39.265182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.905 [2024-11-06 12:38:39.265193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.905 qpair failed and we were unable to recover it.
00:32:07.905 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 359231 ']'
00:32:07.905 [2024-11-06 12:38:39.265282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.265291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.265465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.265475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:07.906 [2024-11-06 12:38:39.265723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.265732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.265801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.265811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:07.906 [2024-11-06 12:38:39.265961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.265972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.266054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:07.906 [2024-11-06 12:38:39.266256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:07.906 [2024-11-06 12:38:39.266422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.266547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:07.906 [2024-11-06 12:38:39.266732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.266831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.266842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.267826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.267835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.268892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.268901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.269987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.269997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.270176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.270277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.270422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.270599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.270821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.906 [2024-11-06 12:38:39.270995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.906 [2024-11-06 12:38:39.271005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.906 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.907 [2024-11-06 12:38:39.271104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.907 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.907 [2024-11-06 12:38:39.271276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.907 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.907 [2024-11-06 12:38:39.271372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.907 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.907 [2024-11-06 12:38:39.271472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.907 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.907 [2024-11-06 12:38:39.271553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.907 qpair failed and we were unable to recover it.
00:32:07.907 [2024-11-06 12:38:39.271686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.271695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.271767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.271777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.271930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.271939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.272103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.272258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.272358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.272430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.272602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.272779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.272788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.273019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.273196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.273410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.273588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.273748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.273911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.273921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.274065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.274277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.274364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.274510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.274616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.274761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.274854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.274863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.275556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.275941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.275951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.276133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.276142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 
00:32:07.907 [2024-11-06 12:38:39.276242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.276251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.276329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.276339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.276429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.907 [2024-11-06 12:38:39.276439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.907 qpair failed and we were unable to recover it. 00:32:07.907 [2024-11-06 12:38:39.276626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.276638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.276784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.276793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.276929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.276939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.277029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.277039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.277212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.277221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.277425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.277434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.277604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.277614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.277847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.277857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.278661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.278956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.278966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.279370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.279828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.279936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.279946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.280645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.280981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.280990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.281091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.281101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.281206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.281215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 
00:32:07.908 [2024-11-06 12:38:39.281348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.281357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.281505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.281515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.908 qpair failed and we were unable to recover it. 00:32:07.908 [2024-11-06 12:38:39.281655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.908 [2024-11-06 12:38:39.281665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.281755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.281764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.281861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.281871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 
00:32:07.909 [2024-11-06 12:38:39.282078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 
00:32:07.909 [2024-11-06 12:38:39.282629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.282970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.282980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.283210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.283220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 00:32:07.909 [2024-11-06 12:38:39.283313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.909 [2024-11-06 12:38:39.283323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.909 qpair failed and we were unable to recover it. 
00:32:07.910 [2024-11-06 12:38:39.291195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.910 [2024-11-06 12:38:39.291221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.910 qpair failed and we were unable to recover it.
00:32:07.910 [2024-11-06 12:38:39.291317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.910 [2024-11-06 12:38:39.291328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.910 qpair failed and we were unable to recover it.
00:32:07.911 [2024-11-06 12:38:39.291854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.291999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.292254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.292354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.292441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.292662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.292828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.292923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.292932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.293352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.293919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.293929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.294088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.294261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.294405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.294548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.294710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.294926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.294936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.295593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.295948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.295958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.296288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 00:32:07.911 [2024-11-06 12:38:39.296934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.911 [2024-11-06 12:38:39.296943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.911 qpair failed and we were unable to recover it. 
00:32:07.911 [2024-11-06 12:38:39.297115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.911 [2024-11-06 12:38:39.297129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.911 qpair failed and we were unable to recover it.
00:32:07.914 [2024-11-06 12:38:39.313106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.313272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.313369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.313528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.313641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 
00:32:07.914 [2024-11-06 12:38:39.313733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.313910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.313920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.314004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.314013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.914 [2024-11-06 12:38:39.314244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.914 [2024-11-06 12:38:39.314253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.914 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.314385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.314395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.314543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.314553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.314735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.314744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.314880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.314891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.314955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.314965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.315132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.315358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.315469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.315615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.315759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.315843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.315853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.316039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.316632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.316964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.316973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.317059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.317069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.317218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.317227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.317485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.317496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.317726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.317736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.317915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.317925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.318133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.318143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.318275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.318284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.318400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.318410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.318670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.318680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.318830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.318839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.319326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.319845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.319992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.320002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 
00:32:07.915 [2024-11-06 12:38:39.320070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.320081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.320254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.320264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.320476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.320486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.320666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.915 [2024-11-06 12:38:39.320677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.915 qpair failed and we were unable to recover it. 00:32:07.915 [2024-11-06 12:38:39.320811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.320822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.320960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.320969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.321084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.321094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.321240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.321250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.321329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.321338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.321476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.321486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.321667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.916 [2024-11-06 12:38:39.321680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.916 qpair failed and we were unable to recover it.
00:32:07.916 [2024-11-06 12:38:39.321939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.916 [2024-11-06 12:38:39.321949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.916 qpair failed and we were unable to recover it.
00:32:07.916 [2024-11-06 12:38:39.322033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.916 [2024-11-06 12:38:39.322043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.916 qpair failed and we were unable to recover it.
00:32:07.916 [2024-11-06 12:38:39.322056] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:32:07.916 [2024-11-06 12:38:39.322110] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:07.916 [2024-11-06 12:38:39.322273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.916 [2024-11-06 12:38:39.322283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.916 qpair failed and we were unable to recover it.
00:32:07.916 [2024-11-06 12:38:39.322440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.322450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.322637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.322647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.322800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.322810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.322958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.322968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.323072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.323214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.323310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.323537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.323635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.323729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.323972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.323982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.324129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.324236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.324422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.324637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.324803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.324957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.324968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.325115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.325124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.325309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.325318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.325544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.325554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.325763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.325773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.325848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.325857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.326140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.326150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.326297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.326306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.326454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.326475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 
00:32:07.916 [2024-11-06 12:38:39.326615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.326624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.326849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.326858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.326994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.327004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.327075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.327086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.916 qpair failed and we were unable to recover it. 00:32:07.916 [2024-11-06 12:38:39.327234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.916 [2024-11-06 12:38:39.327244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.327502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.327512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.327648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.327657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.327831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.327840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.327979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.327989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.328074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.328232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.328477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.328650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.328795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.328952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.328962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.329048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.329695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.329855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.329992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.330151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.330298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.330450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.330621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.330716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.330883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.330892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.331176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.331795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.331988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.331998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.332151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.332162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 00:32:07.917 [2024-11-06 12:38:39.332389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.917 [2024-11-06 12:38:39.332399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.917 qpair failed and we were unable to recover it. 
00:32:07.917 [2024-11-06 12:38:39.332487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.332498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.332591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.332600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.332848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.332857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.332991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.333081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.333304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.333405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.333558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.333733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.333885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.333894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.334083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.334310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.334397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.334566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.334726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.334825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.334925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.334935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.335449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.335909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.335920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.336151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.336653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.336911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.336921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.337297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.337945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.337955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.338035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.338045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.338285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.338295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.338364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.338374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.338455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.338469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 00:32:07.918 [2024-11-06 12:38:39.338676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.918 [2024-11-06 12:38:39.338686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.918 qpair failed and we were unable to recover it. 
00:32:07.918 [2024-11-06 12:38:39.338852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.338863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 
00:32:07.919 [2024-11-06 12:38:39.339612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.339953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.339962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.340063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.340073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 00:32:07.919 [2024-11-06 12:38:39.340224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.919 [2024-11-06 12:38:39.340233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.919 qpair failed and we were unable to recover it. 
00:32:07.920 [2024-11-06 12:38:39.351382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.920 [2024-11-06 12:38:39.351394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.920 qpair failed and we were unable to recover it. 00:32:07.920 [2024-11-06 12:38:39.351550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.920 [2024-11-06 12:38:39.351561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.920 qpair failed and we were unable to recover it. 00:32:07.920 [2024-11-06 12:38:39.351651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.920 [2024-11-06 12:38:39.351660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.920 qpair failed and we were unable to recover it. 00:32:07.920 [2024-11-06 12:38:39.351824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.920 [2024-11-06 12:38:39.351833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.920 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.351986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.351996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 
00:32:07.921 [2024-11-06 12:38:39.356100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.356256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.356422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.356568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.356667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 
00:32:07.921 [2024-11-06 12:38:39.356756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.356905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.356915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 
00:32:07.921 [2024-11-06 12:38:39.357370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.357883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.357892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.358031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 
00:32:07.921 [2024-11-06 12:38:39.358125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.358311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.358413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.358603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 00:32:07.921 [2024-11-06 12:38:39.358756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.921 qpair failed and we were unable to recover it. 
00:32:07.921 [2024-11-06 12:38:39.358830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.921 [2024-11-06 12:38:39.358840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.358919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.358928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.359135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.359238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.359319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.359553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.359710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.359808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.359817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.360259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.360878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.360887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.361057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.361214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.361320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.361482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.361670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.361849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.361859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.362915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.362925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.363098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.363261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.363447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.363603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.363712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.363810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.363973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.363983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.364590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.364934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.364943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.365017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.365026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 
00:32:07.922 [2024-11-06 12:38:39.365106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.365115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.365216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.365227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.365433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.365442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.922 qpair failed and we were unable to recover it. 00:32:07.922 [2024-11-06 12:38:39.365596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.922 [2024-11-06 12:38:39.365606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 00:32:07.923 [2024-11-06 12:38:39.365777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.365787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 
00:32:07.923 [2024-11-06 12:38:39.365852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.365861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 00:32:07.923 [2024-11-06 12:38:39.365948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.365958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 00:32:07.923 [2024-11-06 12:38:39.366119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.366128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 00:32:07.923 [2024-11-06 12:38:39.366292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.366302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 00:32:07.923 [2024-11-06 12:38:39.366478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.923 [2024-11-06 12:38:39.366489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.923 qpair failed and we were unable to recover it. 
00:32:07.923 [2024-11-06 12:38:39.366594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.366604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.366748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.366758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.366920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.367923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.367933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.368033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.368043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.368253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.368263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.368511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.368521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.368693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.368702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.368933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.368943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.369968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.369977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.370957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.370967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.371818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.371995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.923 qpair failed and we were unable to recover it.
00:32:07.923 [2024-11-06 12:38:39.372981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.923 [2024-11-06 12:38:39.372990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.373853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.373996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.374890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.374900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.375928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.375938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.376909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.376919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.377927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.377937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.378951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.378960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.379038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.379047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.379121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.379130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.924 qpair failed and we were unable to recover it.
00:32:07.924 [2024-11-06 12:38:39.379206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.924 [2024-11-06 12:38:39.379215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.379385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.379395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.379474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.379484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.379716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.379726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.379804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.379815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.380891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.380901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.381870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.382043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.382183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.382321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.382484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.925 [2024-11-06 12:38:39.382731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.925 qpair failed and we were unable to recover it.
00:32:07.925 [2024-11-06 12:38:39.382811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.382820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.382976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.382986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.383138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.383148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.383289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.383298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.383514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.383524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 
00:32:07.925 [2024-11-06 12:38:39.383618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.383627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.383852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.383862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.383998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.384154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.384261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 
00:32:07.925 [2024-11-06 12:38:39.384534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.384730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.384816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.384962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.384972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.385136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 
00:32:07.925 [2024-11-06 12:38:39.385377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.385540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.385728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.385813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.385908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.385917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 
00:32:07.925 [2024-11-06 12:38:39.386069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.386291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.386503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.386582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.386754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 
00:32:07.925 [2024-11-06 12:38:39.386832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.386841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.386993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.925 [2024-11-06 12:38:39.387002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.925 qpair failed and we were unable to recover it. 00:32:07.925 [2024-11-06 12:38:39.387174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.387344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.387495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.387666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.387768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.387863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.387872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.388067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.388237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.388427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.388585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.388740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.388891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.388901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.389177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.389187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.389395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.389404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.389561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.389571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.389660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.389669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.389858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.389867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.390106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.390267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.390429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.390606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.390790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.390943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.390953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.391038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.391184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.391343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.391431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.391520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.391662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.391846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.391855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.392063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.392235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.392374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.392568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.392747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.392975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.392985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.393321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.393948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.393957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.394097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.394253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.394391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.394536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.394622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 
00:32:07.926 [2024-11-06 12:38:39.394765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.394949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.394958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.395156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.926 [2024-11-06 12:38:39.395166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.926 qpair failed and we were unable to recover it. 00:32:07.926 [2024-11-06 12:38:39.395310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.395319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 00:32:07.927 [2024-11-06 12:38:39.395471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.395480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 
00:32:07.927 [2024-11-06 12:38:39.395500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.927 [2024-11-06 12:38:39.395644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.395655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 00:32:07.927 [2024-11-06 12:38:39.395812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.395822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 00:32:07.927 [2024-11-06 12:38:39.395972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.395981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 00:32:07.927 [2024-11-06 12:38:39.396130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.396140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 00:32:07.927 [2024-11-06 12:38:39.396293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.927 [2024-11-06 12:38:39.396302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.927 qpair failed and we were unable to recover it. 
00:32:07.927 [2024-11-06 12:38:39.396379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.396454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.396604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.396705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.396851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.396961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.396971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.397913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.397923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.398830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.398840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.399974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.399984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.400958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.400969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.401943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.401954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.927 [2024-11-06 12:38:39.402057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.927 [2024-11-06 12:38:39.402067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.927 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.402232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.402311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.402488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.402658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.402754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.402990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.403888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.403898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.404036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.404045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.404250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.404261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.404466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.404477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.404698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.404727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.404921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.404936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.405905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.405915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.406134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.406144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.406248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.406258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.406434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.406444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.406652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.406663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.406817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.406827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.407047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.407061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.407203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.407213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.407447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.407457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.407693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.407703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.407904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.407914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.408947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.408957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.409956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.409966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.410067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.410077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.410169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.410179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.410361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.410371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.410539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.410550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.928 [2024-11-06 12:38:39.410689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.928 [2024-11-06 12:38:39.410699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.928 qpair failed and we were unable to recover it.
00:32:07.929 [2024-11-06 12:38:39.410784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.929 [2024-11-06 12:38:39.410794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.929 qpair failed and we were unable to recover it.
00:32:07.929 [2024-11-06 12:38:39.410883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.929 [2024-11-06 12:38:39.410894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.929 qpair failed and we were unable to recover it.
00:32:07.929 [2024-11-06 12:38:39.411033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.929 [2024-11-06 12:38:39.411042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.929 qpair failed and we were unable to recover it.
00:32:07.929 [2024-11-06 12:38:39.411235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.929 [2024-11-06 12:38:39.411245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.929 qpair failed and we were unable to recover it.
00:32:07.929 [2024-11-06 12:38:39.411443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.411456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.411600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.411609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.411783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.411793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.411877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.411887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.412020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.412256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.412430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.412591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.412756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.412851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.412932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.412942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.413719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.413919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.413929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.414256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.414915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.414925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.415166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.415175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.415365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.415374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.415590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.415600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.415854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.415864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.415951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.415961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.416053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.416268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.416450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.416619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.416765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.416923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.416932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.417644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.417947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.417957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.418106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.418265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.929 [2024-11-06 12:38:39.418480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.418637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.418741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.418889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.418899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 00:32:07.929 [2024-11-06 12:38:39.419059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.929 [2024-11-06 12:38:39.419069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.929 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.419316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.419394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.419549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.419654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.419743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.419959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.419969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.420569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.420947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.420956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.421114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.421275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.421417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.421613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.421762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.421923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.421932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.422021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.422181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.422275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.422492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.422636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.422865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.422875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.423634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.423949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.423958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.424197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.424755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.424919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.424929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.425074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.425084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.425159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.425168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 00:32:07.930 [2024-11-06 12:38:39.425419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.930 [2024-11-06 12:38:39.425429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.930 qpair failed and we were unable to recover it. 
00:32:07.930 [2024-11-06 12:38:39.425580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.425590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.425821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.425831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.426948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.930 [2024-11-06 12:38:39.426958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.930 qpair failed and we were unable to recover it.
00:32:07.930 [2024-11-06 12:38:39.427041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.427962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.427972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.428931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.428940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.429876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.429886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.430919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.430928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.431976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.431985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.432145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.432156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.432408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.432418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.931 qpair failed and we were unable to recover it.
00:32:07.931 [2024-11-06 12:38:39.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.931 [2024-11-06 12:38:39.432521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.432661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.432823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.432832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.432995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.433821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.433831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.434832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:07.932 [2024-11-06 12:38:39.435700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:07.932 [2024-11-06 12:38:39.435706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:07.932 [2024-11-06 12:38:39.435712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:07.932 [2024-11-06 12:38:39.435717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:07.932 [2024-11-06 12:38:39.435842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.435852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.435993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.436957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.436967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.437057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.437068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.437220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.437229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.437383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.437395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:07.932 [2024-11-06 12:38:39.437311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:07.932 qpair failed and we were unable to recover it.
00:32:07.932 [2024-11-06 12:38:39.437404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:07.932 [2024-11-06 12:38:39.437609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.932 [2024-11-06 12:38:39.437540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:07.932 [2024-11-06 12:38:39.437622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.932 [2024-11-06 12:38:39.437540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.437699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.437708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.437797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.437807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.437957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.437967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.438925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.438935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.439095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.439105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.439349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.439359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.439568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:07.933 [2024-11-06 12:38:39.439579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:07.933 qpair failed and we were unable to recover it.
00:32:07.933 [2024-11-06 12:38:39.439661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.439670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.439904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.439914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.440014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.440245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.440425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 
00:32:07.933 [2024-11-06 12:38:39.440578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.440677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.440913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.440922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.441065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.441075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.441309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.441319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 
00:32:07.933 [2024-11-06 12:38:39.441513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.441533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.441777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.441790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.441941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.441951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.442125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.442134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.442368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.442378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 
00:32:07.933 [2024-11-06 12:38:39.442560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.442571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.442723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.442733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.442965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.442974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.443071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.443080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.443151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.443161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 
00:32:07.933 [2024-11-06 12:38:39.443297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.443307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.443480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.443491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.933 qpair failed and we were unable to recover it. 00:32:07.933 [2024-11-06 12:38:39.443587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.933 [2024-11-06 12:38:39.443596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.443669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.443680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.443815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.443825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.444034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.444692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.444954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.444963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.445344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.445820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.445830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.446010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.446103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.446268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.446424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.446512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.446763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.446842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.446852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.447343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.447970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.447980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 
00:32:07.934 [2024-11-06 12:38:39.448060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.448070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.448223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.448233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.448378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.448388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.934 [2024-11-06 12:38:39.448546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.934 [2024-11-06 12:38:39.448556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.934 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.448728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.448738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.448884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.448893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.448959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.448969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.449539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.449900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.449910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.450297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.450819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.450961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.450971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.451512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.451872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.451882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.452021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.452186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.452343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.452450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.452597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.452761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 
00:32:07.935 [2024-11-06 12:38:39.452942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.452953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.453107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.935 [2024-11-06 12:38:39.453117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.935 qpair failed and we were unable to recover it. 00:32:07.935 [2024-11-06 12:38:39.453271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.453368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.453448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.453613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.453789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.453955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.453966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.454344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.454963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.454973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.455050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.455143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.455558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.455666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.455773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.455869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.455879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.456082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.456232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.456389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.456635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.456721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.456938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.456948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.457090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.457101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.457182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.457192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.457336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.457347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.457574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.457585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.457741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.457752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.457999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.458110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.458254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.458419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.458636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.458804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 00:32:07.936 [2024-11-06 12:38:39.458969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.458981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:07.936 qpair failed and we were unable to recover it. 
00:32:07.936 [2024-11-06 12:38:39.459249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.936 [2024-11-06 12:38:39.459271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.459438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.459454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.459669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.459680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.459768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.459779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.459923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.459934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.460008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.460615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.460811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.460821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.461252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.461756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.461767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.461996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.462097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.462345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.462522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.462630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.462727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.462823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.462834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.463112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.463215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.463371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.463580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.463690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.463847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.463856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.464049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.464225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 
00:32:07.937 [2024-11-06 12:38:39.464305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.464393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.464546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.937 [2024-11-06 12:38:39.464763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.937 [2024-11-06 12:38:39.464773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.937 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.464907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.464917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 
00:32:07.938 [2024-11-06 12:38:39.465096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.465112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.465322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.465333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.465411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.465421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.465626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.465637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.465851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.465861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 
00:32:07.938 [2024-11-06 12:38:39.466042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.466215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.466408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.466574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 00:32:07.938 [2024-11-06 12:38:39.466809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 
00:32:07.938 [2024-11-06 12:38:39.466973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:07.938 [2024-11-06 12:38:39.466983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:07.938 qpair failed and we were unable to recover it. 
[... identical error triple (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated for every connection attempt from 12:38:39.467082 through 12:38:39.483982; errno 111 is ECONNREFUSED ...]
00:32:08.220 [2024-11-06 12:38:39.484139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 
00:32:08.220 [2024-11-06 12:38:39.484760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.484927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.484937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.485029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.220 [2024-11-06 12:38:39.485039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.220 qpair failed and we were unable to recover it. 00:32:08.220 [2024-11-06 12:38:39.485132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.485142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.485390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.485400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.485547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.485558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.485636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.485646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.485817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.485827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.486216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.486821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.486909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.486919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.487469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.487976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.487986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.488148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.488294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.488394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.488497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.488588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.488702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.488972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.488984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.489625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.489989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.489999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.490077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.490087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 
00:32:08.221 [2024-11-06 12:38:39.490164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.490174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.490245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.490254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.221 qpair failed and we were unable to recover it. 00:32:08.221 [2024-11-06 12:38:39.490389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.221 [2024-11-06 12:38:39.490399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.490547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.490557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.490636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.490646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.490874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.490884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.490970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.490980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.491063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.491348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.491524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.491742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.491890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.491900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.492290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.492863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.492873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.493009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.493565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.493899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.493908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.494045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.494285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.494361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.494455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.494624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.494699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.494781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.494791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.495053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.495063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.495152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.495162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.495295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.495304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 00:32:08.222 [2024-11-06 12:38:39.495443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.222 [2024-11-06 12:38:39.495453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.222 qpair failed and we were unable to recover it. 
00:32:08.222 [2024-11-06 12:38:39.495610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.495621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.495826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.495836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.495915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.495925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.496254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.496812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.496976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.496986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.497480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.497980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.497990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.498080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.498705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.498965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.498975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.499202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 
00:32:08.223 [2024-11-06 12:38:39.499779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.499871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.499881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.223 [2024-11-06 12:38:39.500063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.223 qpair failed and we were unable to recover it. 00:32:08.223 [2024-11-06 12:38:39.500208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.500287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.500434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.500533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.500693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.500912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.500922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.500992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.501074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.501575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.501987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.501996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.502134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.502289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.502386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.502547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.502629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.502777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.502941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.502951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.503413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.503889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 
00:32:08.224 [2024-11-06 12:38:39.503980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.503994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.224 [2024-11-06 12:38:39.504142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.224 [2024-11-06 12:38:39.504152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.224 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.504290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.504441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.504539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 
00:32:08.225 [2024-11-06 12:38:39.504681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.504759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.504923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.504933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 
00:32:08.225 [2024-11-06 12:38:39.505284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.505879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 
00:32:08.225 [2024-11-06 12:38:39.505971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.505981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.506065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.506075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.506140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.506150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.506374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.506384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 00:32:08.225 [2024-11-06 12:38:39.506466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.225 [2024-11-06 12:38:39.506476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.225 qpair failed and we were unable to recover it. 
00:32:08.226 [2024-11-06 12:38:39.512552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.226 [2024-11-06 12:38:39.512573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.226 qpair failed and we were unable to recover it.
00:32:08.228 [2024-11-06 12:38:39.520888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.520897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 
00:32:08.228 [2024-11-06 12:38:39.521736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.521965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.521976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 
00:32:08.228 [2024-11-06 12:38:39.522209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.522754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 
00:32:08.228 [2024-11-06 12:38:39.522990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.522999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 
00:32:08.228 [2024-11-06 12:38:39.523642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.523948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.523958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.524120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.524129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 00:32:08.228 [2024-11-06 12:38:39.524335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.228 [2024-11-06 12:38:39.524345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.228 qpair failed and we were unable to recover it. 
00:32:08.228 [2024-11-06 12:38:39.524529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.524540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.524645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.524655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.524744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.524754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.524831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.524840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.524907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.524917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 
00:32:08.229 [2024-11-06 12:38:39.524995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 
00:32:08.229 [2024-11-06 12:38:39.525609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.525939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.525948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.526092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 
00:32:08.229 [2024-11-06 12:38:39.526173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.526343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.526512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.526684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.229 qpair failed and we were unable to recover it. 00:32:08.229 [2024-11-06 12:38:39.526832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.229 [2024-11-06 12:38:39.526842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.526976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.526985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.527677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.527980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.527990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.528150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.528378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.528453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.528610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.528699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.528776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.528959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.528968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.529561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.529912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.529992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.530137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.530727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.530905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.530996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.531075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.531324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 
00:32:08.230 [2024-11-06 12:38:39.531483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.531644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.230 [2024-11-06 12:38:39.531728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.230 [2024-11-06 12:38:39.531738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.230 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.531877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.531886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.532118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.532312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.532418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.532524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.532687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.532764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.532853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.532862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.533628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.533938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.533948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.534378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.534920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.534930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.535131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.535647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.535986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.535995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.536153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.536232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.536475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.536638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.536736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.536848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.536857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.537039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.537049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 
00:32:08.231 [2024-11-06 12:38:39.537300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.231 [2024-11-06 12:38:39.537310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.231 qpair failed and we were unable to recover it. 00:32:08.231 [2024-11-06 12:38:39.537483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.537493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.537578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.537588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.537735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.537745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.537949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.537958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.538099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.538187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.538270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.538484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.538632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.538796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.538907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.538917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.539054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.539202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.539353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.539575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.539755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.539850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.539861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.540067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.540237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.540457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.540552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.540661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.540902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.540912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.541063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.541072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.541238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.541248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.541423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.541432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.541667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.541677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.541828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.541838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.542039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.542252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.542327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.542490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.542595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:08.232 [2024-11-06 12:38:39.542838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.232 [2024-11-06 12:38:39.542914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.542924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.543004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.543013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:32:08.232 [2024-11-06 12:38:39.543220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.543230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 00:32:08.232 [2024-11-06 12:38:39.543400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.232 [2024-11-06 12:38:39.543410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.232 qpair failed and we were unable to recover it. 
00:32:08.233 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.233 [2024-11-06 12:38:39.543543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.543554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.543622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.543632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.543713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.543723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.233 [2024-11-06 12:38:39.543879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.543889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 
00:32:08.233 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:08.233 [2024-11-06 12:38:39.544094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.544105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.544173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.544182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.544258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.544267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.544368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.544378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 00:32:08.233 [2024-11-06 12:38:39.544467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.233 [2024-11-06 12:38:39.544477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.233 qpair failed and we were unable to recover it. 
00:32:08.233 [2024-11-06 12:38:39.544619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.233 [2024-11-06 12:38:39.544629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.233 qpair failed and we were unable to recover it.
00:32:08.234 [previous error group repeated for tqpair=0x7f205c000b90 through 2024-11-06 12:38:39.552340]
00:32:08.234 [2024-11-06 12:38:39.552076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.234 [2024-11-06 12:38:39.552088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.234 qpair failed and we were unable to recover it.
00:32:08.235 [previous error group repeated for tqpair=0x7f2060000b90 through 2024-11-06 12:38:39.557286]
00:32:08.235 [2024-11-06 12:38:39.557369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.235 [2024-11-06 12:38:39.557381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.235 qpair failed and we were unable to recover it.
00:32:08.236 [previous error group repeated for tqpair=0x7f2068000b90 through 2024-11-06 12:38:39.560993]
00:32:08.236 [2024-11-06 12:38:39.561132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 
00:32:08.236 [2024-11-06 12:38:39.561612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.561856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.561865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.562020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 
00:32:08.236 [2024-11-06 12:38:39.562235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.562400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.562574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.562737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.562896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 
00:32:08.236 [2024-11-06 12:38:39.562982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.562992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.563083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.563093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.563163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.563174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.563248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.563257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 00:32:08.236 [2024-11-06 12:38:39.563350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.236 [2024-11-06 12:38:39.563361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.236 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.563497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.563509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.563593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.563604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.563670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.563679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.563908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.563918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.563993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.564082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.564708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.564949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.564960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.565298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.565721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.565895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.565905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.566061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.566271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.566370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.566465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.566635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.566922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.566932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.567356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.567884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 
00:32:08.237 [2024-11-06 12:38:39.567976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.567985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.568123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.568133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.237 [2024-11-06 12:38:39.568217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.237 [2024-11-06 12:38:39.568227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.237 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 
00:32:08.238 [2024-11-06 12:38:39.568548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.568977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.568986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 
00:32:08.238 [2024-11-06 12:38:39.569068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 
00:32:08.238 [2024-11-06 12:38:39.569642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.569943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.569952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.570108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 
00:32:08.238 [2024-11-06 12:38:39.570201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.570349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.570421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.570639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 00:32:08.238 [2024-11-06 12:38:39.570736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.238 [2024-11-06 12:38:39.570746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.238 qpair failed and we were unable to recover it. 
00:32:08.240 [2024-11-06 12:38:39.579340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.579437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:08.240 [2024-11-06 12:38:39.579607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.579692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.579784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.579859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.579869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:08.240 [2024-11-06 12:38:39.580005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.580100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.580186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.580269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.240 [2024-11-06 12:38:39.580367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.580449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:08.240 [2024-11-06 12:38:39.580689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.240 [2024-11-06 12:38:39.580787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.240 [2024-11-06 12:38:39.580797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420
00:32:08.240 qpair failed and we were unable to recover it.
00:32:08.241 [2024-11-06 12:38:39.583614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.583624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.583706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.583716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.583795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.583805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.583885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.583895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.583954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.583964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 
00:32:08.241 [2024-11-06 12:38:39.584040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 
00:32:08.241 [2024-11-06 12:38:39.584616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.584859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.584869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 
00:32:08.241 [2024-11-06 12:38:39.585175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 
00:32:08.241 [2024-11-06 12:38:39.585751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.585969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.585979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.586066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.586076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 00:32:08.241 [2024-11-06 12:38:39.586148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.241 [2024-11-06 12:38:39.586157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.241 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.586252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.586340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.586582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.586673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.586764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.586859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.586870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.587898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.587908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.588000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.588164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.588304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.588547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.588622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.588772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.588919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.588929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.589300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.589969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.589979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.590056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.590218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.590362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.590504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.590598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.590815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.590962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.590975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 
00:32:08.242 [2024-11-06 12:38:39.591378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.242 [2024-11-06 12:38:39.591728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.242 qpair failed and we were unable to recover it. 00:32:08.242 [2024-11-06 12:38:39.591860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.591870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.591957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.591966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.592530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.592921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.592999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.593098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.593261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.593479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.593559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.593701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.593800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.593946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.593956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.594146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2068000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.594402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.594629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.594706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.594794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.594880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.594889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.595302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.595874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.595956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.595965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.596224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.596235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.596337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.596346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.596527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.596537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.596704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.596713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.243 [2024-11-06 12:38:39.596850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.596860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.596994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.597004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.597138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.597147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.597209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.597219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 00:32:08.243 [2024-11-06 12:38:39.597295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.243 [2024-11-06 12:38:39.597304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.243 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.597382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.597469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.597557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.597699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.597846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.597942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.597951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.598622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.598860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.598870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.599215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.599856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.599934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.599943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.600441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.600938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.600947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.601684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.601919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.601929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.602084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.602094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.602300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.602310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 
00:32:08.244 [2024-11-06 12:38:39.602530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.602541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.602800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.602810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.602946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.603045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.244 [2024-11-06 12:38:39.603055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.244 qpair failed and we were unable to recover it. 00:32:08.244 [2024-11-06 12:38:39.603257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.603402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.603492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.603573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.603745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.603917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.603927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.604023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.604776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.604963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.604972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.605129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.605273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.605354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.605449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.605678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.605842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.605852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.606118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.606129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.606286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.606296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.606446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.606456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.606712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.606723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.606867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.606876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 00:32:08.245 [2024-11-06 12:38:39.607030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.245 [2024-11-06 12:38:39.607040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420 00:32:08.245 qpair failed and we were unable to recover it. 
00:32:08.245 [2024-11-06 12:38:39.607175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.607330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.607420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.607567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.607652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.607864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.607873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.608958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.609935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.609944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 Malloc0
00:32:08.245 [2024-11-06 12:38:39.610092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.245 [2024-11-06 12:38:39.610102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.245 qpair failed and we were unable to recover it.
00:32:08.245 [2024-11-06 12:38:39.610241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.610251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.610336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.610346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.610553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.610564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.610711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.246 [2024-11-06 12:38:39.610722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.610928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.610939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.611005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.611015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.611096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:08.246 [2024-11-06 12:38:39.611106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.611339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.611349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.611411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.246 [2024-11-06 12:38:39.611423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.611581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.611592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:08.246 [2024-11-06 12:38:39.611794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.611804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.612953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.612963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.613958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.613968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.614847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.614856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.615003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.615014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.615166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.615179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.615332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.615341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.615478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.615489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.246 [2024-11-06 12:38:39.615622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.246 [2024-11-06 12:38:39.615632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.246 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.615764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.615774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.615999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.616219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.616315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.616575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.616755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.616841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.616850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:08.247 [2024-11-06 12:38:39.617637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.617943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.617952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.618914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.618924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.619894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.619904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.620981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.620990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.621140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.247 [2024-11-06 12:38:39.621150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.247 qpair failed and we were unable to recover it.
00:32:08.247 [2024-11-06 12:38:39.621323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.247 [2024-11-06 12:38:39.621333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.247 qpair failed and we were unable to recover it. 00:32:08.247 [2024-11-06 12:38:39.621560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.247 [2024-11-06 12:38:39.621570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.247 qpair failed and we were unable to recover it. 00:32:08.247 [2024-11-06 12:38:39.621795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.247 [2024-11-06 12:38:39.621804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.247 qpair failed and we were unable to recover it. 00:32:08.247 [2024-11-06 12:38:39.621968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.247 [2024-11-06 12:38:39.621977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.247 qpair failed and we were unable to recover it. 00:32:08.247 [2024-11-06 12:38:39.622157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.247 [2024-11-06 12:38:39.622166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.247 qpair failed and we were unable to recover it. 
00:32:08.248 [2024-11-06 12:38:39.622253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.622263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.622445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.622455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.622663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.622673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.622769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.622779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.622916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.622925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 
00:32:08.248 [2024-11-06 12:38:39.623153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.623163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.623229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.623238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.623404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.623414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.623674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.623684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.623924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 
00:32:08.248 [2024-11-06 12:38:39.624103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.624204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.624307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.624468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.624707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 
00:32:08.248 [2024-11-06 12:38:39.624858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.624868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.625007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.625016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.625095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.625105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.625166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.625176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 00:32:08.248 [2024-11-06 12:38:39.625377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:08.248 [2024-11-06 12:38:39.625386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420 00:32:08.248 qpair failed and we were unable to recover it. 
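The repeated `connect() failed, errno = 111` records above correspond to Linux `ECONNREFUSED`: nothing is listening at 10.0.0.2:4420 while the NVMe-oF target is down, which is the expected condition this disconnect test exercises. A minimal sketch of how that refusal surfaces at the socket layer (the `try_connect` helper is hypothetical, not part of the SPDK test suite):

```python
import errno
import socket

# On Linux, errno 111 is ECONNREFUSED -- the peer actively rejected
# the TCP connection, matching the posix_sock_create error records.
assert errno.ECONNREFUSED == 111

def try_connect(addr, port, timeout=0.5):
    """Attempt a TCP connect; return 0 on success, the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((addr, port))
        return 0
    except OSError as e:
        # e.errno can be None for pure timeouts; normalize to -1.
        return e.errno if e.errno is not None else -1
    finally:
        s.close()
```

The SPDK NVMe/TCP initiator keeps retrying the qpair connection in a loop, which is why the same record repeats until the target comes back.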
00:32:08.248 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:08.248 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:32:08.248 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:08.248 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:32:08.249 [2024-11-06 12:38:39.634222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.249 [2024-11-06 12:38:39.634298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.634457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.634561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:08.249 [2024-11-06 12:38:39.634676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.634774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.634961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.249 [2024-11-06 12:38:39.634975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:08.249 [2024-11-06 12:38:39.635250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.249 [2024-11-06 12:38:39.635718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.249 [2024-11-06 12:38:39.635727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.249 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.635880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.635890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.635969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.635979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.636869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.636878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.637937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.637946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2060000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.638978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.638989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.639883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.639893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.250 [2024-11-06 12:38:39.640820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.250 qpair failed and we were unable to recover it.
00:32:08.250 [2024-11-06 12:38:39.640955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.640964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.641891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.641901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.642033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.642261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.251 [2024-11-06 12:38:39.642422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.642575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:08.251 [2024-11-06 12:38:39.642722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.642802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.642908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.642919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:08.251 [2024-11-06 12:38:39.642999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:08.251 [2024-11-06 12:38:39.643283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.643954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.643965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.644979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.644989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.645082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.645163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.645318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.645488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:08.251 [2024-11-06 12:38:39.645672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f205c000b90 with addr=10.0.0.2, port=4420
00:32:08.251 qpair failed and we were unable to recover it.
00:32:08.251 [2024-11-06 12:38:39.645772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-11-06 12:38:39.648269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-06 12:38:39.648342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-06 12:38:39.648359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-06 12:38:39.648367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-06 12:38:39.648373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
[2024-11-06 12:38:39.648392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 12:38:39.658204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-06 12:38:39.658275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-06 12:38:39.658290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-06 12:38:39.658296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-06 12:38:39.658302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
[2024-11-06 12:38:39.658317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:08.251 12:38:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 358438
00:32:08.251 [2024-11-06 12:38:39.668200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.251 [2024-11-06 12:38:39.668302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.668315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.668321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.668327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.668341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.678127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.678186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.678199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.678205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.678210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.678224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.688179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.688240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.688254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.688260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.688265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.688279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.698202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.698275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.698288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.698295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.698300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.698313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.708220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.708288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.708301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.708308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.708313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.708327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.718168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.718228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.718241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.718247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.718252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.718266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.728300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.728366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.728379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.728385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.728391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.728405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.738348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.738412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.738436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.738443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.738448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.738471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.748400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.748493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.748506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.748512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.748518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.748532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.758315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.758372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.758385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.758391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.758397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.758411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.768386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.768463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.768476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.768481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.768487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.768501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.778461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.778543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.778556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.778565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.778570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.778583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.788434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.788497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.788510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.788516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.788521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.788535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.798466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.798521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.798533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.798539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.798545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.798559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.252 [2024-11-06 12:38:39.808520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.252 [2024-11-06 12:38:39.808597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.252 [2024-11-06 12:38:39.808610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.252 [2024-11-06 12:38:39.808616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.252 [2024-11-06 12:38:39.808622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.252 [2024-11-06 12:38:39.808636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.252 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.818557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.818613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.818626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.818632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.818637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.512 [2024-11-06 12:38:39.818654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.512 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.828602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.828656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.828669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.828675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.828680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.512 [2024-11-06 12:38:39.828695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.512 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.838540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.838592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.838605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.838611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.838616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.512 [2024-11-06 12:38:39.838630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.512 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.848630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.848715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.848727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.848733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.848738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.512 [2024-11-06 12:38:39.848752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.512 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.858621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.858681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.858693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.858699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.858704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.512 [2024-11-06 12:38:39.858718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.512 qpair failed and we were unable to recover it.
00:32:08.512 [2024-11-06 12:38:39.868609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.512 [2024-11-06 12:38:39.868674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.512 [2024-11-06 12:38:39.868686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.512 [2024-11-06 12:38:39.868692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.512 [2024-11-06 12:38:39.868697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.868711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.878684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.878753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.878765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.878771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.878776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.878790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.888722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.888783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.888795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.888801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.888806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.888820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.898785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.898889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.898901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.898907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.898912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.898926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.908803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.909068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.909082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.909092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.909097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.909112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.918771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.918824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.918837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.918843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.918848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.918862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.928900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.928957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.928969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.928975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.928981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.928995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.938894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.938951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.938963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.938969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.938975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.938988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.948832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.948891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.948904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.948910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.948916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.948932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.958884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.958939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.958952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.958958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.958963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.958976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.968971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.513 [2024-11-06 12:38:39.969032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.513 [2024-11-06 12:38:39.969044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.513 [2024-11-06 12:38:39.969050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.513 [2024-11-06 12:38:39.969055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.513 [2024-11-06 12:38:39.969069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.513 qpair failed and we were unable to recover it.
00:32:08.513 [2024-11-06 12:38:39.978999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.513 [2024-11-06 12:38:39.979059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.513 [2024-11-06 12:38:39.979071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.513 [2024-11-06 12:38:39.979077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.513 [2024-11-06 12:38:39.979082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.513 [2024-11-06 12:38:39.979095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.513 qpair failed and we were unable to recover it. 
00:32:08.513 [2024-11-06 12:38:39.988937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.513 [2024-11-06 12:38:39.989000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.513 [2024-11-06 12:38:39.989013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.513 [2024-11-06 12:38:39.989019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.513 [2024-11-06 12:38:39.989024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.513 [2024-11-06 12:38:39.989038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.513 qpair failed and we were unable to recover it. 
00:32:08.513 [2024-11-06 12:38:39.999007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.513 [2024-11-06 12:38:39.999060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.513 [2024-11-06 12:38:39.999073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.513 [2024-11-06 12:38:39.999079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.513 [2024-11-06 12:38:39.999084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.513 [2024-11-06 12:38:39.999099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.513 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.009134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.009225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.009241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.009249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.009256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.009273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.019125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.019190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.019203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.019209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.019215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.019229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.029194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.029268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.029283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.029291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.029297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.029313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.039064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.039125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.039144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.039150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.039156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.039170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.049204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.049269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.049282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.049289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.049294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.049308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.059217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.059275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.059289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.059295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.059300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.059314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.069211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.069281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.069297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.069304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.069311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.069327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.079165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.079267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.079280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.079286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.079294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.079309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.089325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.089391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.089404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.089410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.089416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.089430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.099278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.099345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.099359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.099365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.099370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.099384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.109335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.109404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.109417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.109423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.109429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.109443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.514 [2024-11-06 12:38:40.119349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.514 [2024-11-06 12:38:40.119406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.514 [2024-11-06 12:38:40.119420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.514 [2024-11-06 12:38:40.119426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.514 [2024-11-06 12:38:40.119431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.514 [2024-11-06 12:38:40.119445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.514 qpair failed and we were unable to recover it. 
00:32:08.774 [2024-11-06 12:38:40.129389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.774 [2024-11-06 12:38:40.129447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.774 [2024-11-06 12:38:40.129465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.774 [2024-11-06 12:38:40.129472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.774 [2024-11-06 12:38:40.129478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.774 [2024-11-06 12:38:40.129492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.774 qpair failed and we were unable to recover it. 
00:32:08.774 [2024-11-06 12:38:40.139378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.774 [2024-11-06 12:38:40.139439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.774 [2024-11-06 12:38:40.139453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.774 [2024-11-06 12:38:40.139463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.774 [2024-11-06 12:38:40.139469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.774 [2024-11-06 12:38:40.139483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.774 qpair failed and we were unable to recover it. 
00:32:08.774 [2024-11-06 12:38:40.149476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.774 [2024-11-06 12:38:40.149541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.774 [2024-11-06 12:38:40.149554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.774 [2024-11-06 12:38:40.149560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.774 [2024-11-06 12:38:40.149565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.774 [2024-11-06 12:38:40.149579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.774 qpair failed and we were unable to recover it. 
00:32:08.774 [2024-11-06 12:38:40.159474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.774 [2024-11-06 12:38:40.159529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.774 [2024-11-06 12:38:40.159542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.774 [2024-11-06 12:38:40.159548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.774 [2024-11-06 12:38:40.159554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.774 [2024-11-06 12:38:40.159568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.774 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.169516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.169580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.169595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.169601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.169607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.169620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.179493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.179546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.179559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.179565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.179570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.179584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.189499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.189561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.189574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.189580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.189586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.189599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.199597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.199652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.199664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.199671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.199676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.199691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.209647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.209713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.209726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.209732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.209741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.209754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.219617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.219677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.219689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.219695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.219700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.219714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.229747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.229838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.229850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.229856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.229861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.229875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.239702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.239760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.239773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.239779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.239784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.239797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.249793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.249853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.249866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.249872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.249877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.249890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.259757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:08.775 [2024-11-06 12:38:40.259813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:08.775 [2024-11-06 12:38:40.259825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:08.775 [2024-11-06 12:38:40.259831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:08.775 [2024-11-06 12:38:40.259836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:08.775 [2024-11-06 12:38:40.259849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:08.775 qpair failed and we were unable to recover it. 
00:32:08.775 [2024-11-06 12:38:40.269849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.775 [2024-11-06 12:38:40.269905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.775 [2024-11-06 12:38:40.269918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.775 [2024-11-06 12:38:40.269924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.775 [2024-11-06 12:38:40.269930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.775 [2024-11-06 12:38:40.269943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.775 qpair failed and we were unable to recover it.
00:32:08.775 [2024-11-06 12:38:40.279734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.775 [2024-11-06 12:38:40.279795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.775 [2024-11-06 12:38:40.279808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.775 [2024-11-06 12:38:40.279814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.775 [2024-11-06 12:38:40.279819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.775 [2024-11-06 12:38:40.279833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.775 qpair failed and we were unable to recover it.
00:32:08.775 [2024-11-06 12:38:40.289842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.775 [2024-11-06 12:38:40.289912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.775 [2024-11-06 12:38:40.289924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.775 [2024-11-06 12:38:40.289930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.775 [2024-11-06 12:38:40.289935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.775 [2024-11-06 12:38:40.289948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.775 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.299896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.299961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.299976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.299982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.299988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.300001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.309932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.309990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.310002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.310008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.310014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.310028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.319903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.319956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.319968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.319973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.319978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.319992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.329991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.330055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.330067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.330073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.330079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.330092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.340047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.340112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.340125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.340134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.340139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.340153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.350034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.350094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.350106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.350112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.350118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.350131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.359957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.360015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.360027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.360033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.360038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.360052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.370104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.370165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.370177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.370183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.370189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.370202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:08.776 [2024-11-06 12:38:40.380126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:08.776 [2024-11-06 12:38:40.380183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:08.776 [2024-11-06 12:38:40.380195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:08.776 [2024-11-06 12:38:40.380201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:08.776 [2024-11-06 12:38:40.380207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:08.776 [2024-11-06 12:38:40.380223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:08.776 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.390144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.390216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.390229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.390235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.390240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.390253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.400131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.400184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.400197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.400203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.400208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.400222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.410214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.410275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.410287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.410292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.410298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.410312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.420266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.420325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.420337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.420343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.420348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.420362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.430268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.430338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.430351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.430356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.430361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.430375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.440245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.440301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.440313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.440318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.036 [2024-11-06 12:38:40.440324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.036 [2024-11-06 12:38:40.440337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.036 qpair failed and we were unable to recover it.
00:32:09.036 [2024-11-06 12:38:40.450439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.036 [2024-11-06 12:38:40.450513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.036 [2024-11-06 12:38:40.450526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.036 [2024-11-06 12:38:40.450532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.450537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.450551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.460463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.460527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.460540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.460546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.460551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.460564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.470432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.470491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.470504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.470513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.470518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.470532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.480415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.480475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.480488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.480494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.480499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.480513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.490439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.490505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.490519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.490525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.490531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.490545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.500507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.500568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.500583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.500589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.500595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.500609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.510521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.510610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.510623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.510629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.510634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.510651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.520416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.520477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.520490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.520495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.520501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.520514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.530549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.530614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.530627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.530633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.530638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.530652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.540631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.540725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.540738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.540744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.540749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.540762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.550621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.550678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.550691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.550697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.550702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.550715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.560582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.560638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.560650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.560656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.560661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.560675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.570684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.570748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.570760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.570767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.570772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.037 [2024-11-06 12:38:40.570786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.037 qpair failed and we were unable to recover it.
00:32:09.037 [2024-11-06 12:38:40.580726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.037 [2024-11-06 12:38:40.580782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.037 [2024-11-06 12:38:40.580794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.037 [2024-11-06 12:38:40.580800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.037 [2024-11-06 12:38:40.580806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.038 [2024-11-06 12:38:40.580819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.038 qpair failed and we were unable to recover it.
00:32:09.038 [2024-11-06 12:38:40.590720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.038 [2024-11-06 12:38:40.590791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.038 [2024-11-06 12:38:40.590803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.038 [2024-11-06 12:38:40.590809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.038 [2024-11-06 12:38:40.590815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.038 [2024-11-06 12:38:40.590828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.038 qpair failed and we were unable to recover it.
00:32:09.038 [2024-11-06 12:38:40.600685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.038 [2024-11-06 12:38:40.600749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.038 [2024-11-06 12:38:40.600764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.038 [2024-11-06 12:38:40.600770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.038 [2024-11-06 12:38:40.600776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.038 [2024-11-06 12:38:40.600789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.038 qpair failed and we were unable to recover it.
00:32:09.038 [2024-11-06 12:38:40.610783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.038 [2024-11-06 12:38:40.610842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.038 [2024-11-06 12:38:40.610854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.038 [2024-11-06 12:38:40.610860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.038 [2024-11-06 12:38:40.610865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.038 [2024-11-06 12:38:40.610879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.038 qpair failed and we were unable to recover it.
00:32:09.038 [2024-11-06 12:38:40.620788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.038 [2024-11-06 12:38:40.620852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.038 [2024-11-06 12:38:40.620864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.038 [2024-11-06 12:38:40.620870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.038 [2024-11-06 12:38:40.620876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.038 [2024-11-06 12:38:40.620890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.038 qpair failed and we were unable to recover it. 
00:32:09.038 [2024-11-06 12:38:40.630840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.038 [2024-11-06 12:38:40.630897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.038 [2024-11-06 12:38:40.630909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.038 [2024-11-06 12:38:40.630915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.038 [2024-11-06 12:38:40.630920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.038 [2024-11-06 12:38:40.630934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.038 qpair failed and we were unable to recover it. 
00:32:09.038 [2024-11-06 12:38:40.640829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.038 [2024-11-06 12:38:40.640884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.038 [2024-11-06 12:38:40.640896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.038 [2024-11-06 12:38:40.640901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.038 [2024-11-06 12:38:40.640910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.038 [2024-11-06 12:38:40.640923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.038 qpair failed and we were unable to recover it. 
00:32:09.038 [2024-11-06 12:38:40.650931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.038 [2024-11-06 12:38:40.651039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.038 [2024-11-06 12:38:40.651051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.038 [2024-11-06 12:38:40.651057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.038 [2024-11-06 12:38:40.651063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.038 [2024-11-06 12:38:40.651076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.038 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.660993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.297 [2024-11-06 12:38:40.661056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.297 [2024-11-06 12:38:40.661068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.297 [2024-11-06 12:38:40.661075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.297 [2024-11-06 12:38:40.661080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.297 [2024-11-06 12:38:40.661093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.297 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.670952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.297 [2024-11-06 12:38:40.671009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.297 [2024-11-06 12:38:40.671021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.297 [2024-11-06 12:38:40.671027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.297 [2024-11-06 12:38:40.671033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.297 [2024-11-06 12:38:40.671046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.297 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.680921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.297 [2024-11-06 12:38:40.680975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.297 [2024-11-06 12:38:40.680987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.297 [2024-11-06 12:38:40.680993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.297 [2024-11-06 12:38:40.680999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.297 [2024-11-06 12:38:40.681013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.297 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.691037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.297 [2024-11-06 12:38:40.691103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.297 [2024-11-06 12:38:40.691115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.297 [2024-11-06 12:38:40.691121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.297 [2024-11-06 12:38:40.691126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.297 [2024-11-06 12:38:40.691140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.297 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.701040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.297 [2024-11-06 12:38:40.701128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.297 [2024-11-06 12:38:40.701140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.297 [2024-11-06 12:38:40.701145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.297 [2024-11-06 12:38:40.701151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.297 [2024-11-06 12:38:40.701165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.297 qpair failed and we were unable to recover it.
00:32:09.297 [2024-11-06 12:38:40.710988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.711047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.711059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.711065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.711071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.711084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.721041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.721097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.721110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.721116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.721121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.721135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.731130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.731192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.731210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.731216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.731221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.731235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.741154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.741231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.741244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.741250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.741256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.741270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.751200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.751260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.751273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.751279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.751284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.751298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.761160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.761214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.761227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.761233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.761238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.761251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.771236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.771312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.771325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.771331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.771339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.771354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.781276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.781338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.781350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.781357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.781362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.781375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.791343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.791404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.791417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.791423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.791428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.791441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.801266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.801322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.801334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.801340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.801345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.801359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.811354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.811412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.811424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.811430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.811435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.811448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.821414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.821473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.821487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.821493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.821498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.821512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.831417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.298 [2024-11-06 12:38:40.831476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.298 [2024-11-06 12:38:40.831489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.298 [2024-11-06 12:38:40.831495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.298 [2024-11-06 12:38:40.831500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.298 [2024-11-06 12:38:40.831514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.298 qpair failed and we were unable to recover it.
00:32:09.298 [2024-11-06 12:38:40.841395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.841455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.841471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.841477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.841482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.841495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.851475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.851532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.851544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.851550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.851556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.851569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.861500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.861611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.861628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.861634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.861639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.861653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.871520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.871593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.871606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.871612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.871617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.871630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.881502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.881556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.881568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.881574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.881579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.881592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.891601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.891662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.891674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.891680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.891685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.891699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.901612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.901669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.901682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.901690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.901696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.901709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.299 [2024-11-06 12:38:40.911622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.299 [2024-11-06 12:38:40.911678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.299 [2024-11-06 12:38:40.911690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.299 [2024-11-06 12:38:40.911696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.299 [2024-11-06 12:38:40.911702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.299 [2024-11-06 12:38:40.911715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.299 qpair failed and we were unable to recover it.
00:32:09.559 [2024-11-06 12:38:40.921613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.559 [2024-11-06 12:38:40.921670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.559 [2024-11-06 12:38:40.921682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.559 [2024-11-06 12:38:40.921688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.559 [2024-11-06 12:38:40.921693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.559 [2024-11-06 12:38:40.921706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.559 qpair failed and we were unable to recover it.
00:32:09.559 [2024-11-06 12:38:40.931703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.559 [2024-11-06 12:38:40.931779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.559 [2024-11-06 12:38:40.931792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.559 [2024-11-06 12:38:40.931798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.559 [2024-11-06 12:38:40.931803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.559 [2024-11-06 12:38:40.931817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.559 qpair failed and we were unable to recover it. 
00:32:09.559 [2024-11-06 12:38:40.941732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.559 [2024-11-06 12:38:40.941791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.559 [2024-11-06 12:38:40.941803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.559 [2024-11-06 12:38:40.941809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.559 [2024-11-06 12:38:40.941815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.559 [2024-11-06 12:38:40.941832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.559 qpair failed and we were unable to recover it. 
00:32:09.559 [2024-11-06 12:38:40.951748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.559 [2024-11-06 12:38:40.951812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.559 [2024-11-06 12:38:40.951825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.559 [2024-11-06 12:38:40.951831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.559 [2024-11-06 12:38:40.951836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.559 [2024-11-06 12:38:40.951850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.559 qpair failed and we were unable to recover it. 
00:32:09.559 [2024-11-06 12:38:40.961703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.559 [2024-11-06 12:38:40.961759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.559 [2024-11-06 12:38:40.961771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.559 [2024-11-06 12:38:40.961776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.559 [2024-11-06 12:38:40.961782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.559 [2024-11-06 12:38:40.961796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.559 qpair failed and we were unable to recover it. 
00:32:09.559 [2024-11-06 12:38:40.971819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.559 [2024-11-06 12:38:40.971911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:40.971923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:40.971928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:40.971934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:40.971947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:40.981845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:40.981905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:40.981917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:40.981923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:40.981929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:40.981943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:40.991866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:40.991934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:40.991947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:40.991953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:40.991958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:40.991971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.001851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.001905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.001917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.001923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.001928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.001941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.011853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.011911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.011923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.011929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.011934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.011948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.021981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.022037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.022049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.022055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.022060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.022074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.031987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.032061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.032074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.032083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.032088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.032101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.041999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.042094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.042106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.042112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.042117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.042131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.051956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.052021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.052033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.052039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.052044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.052057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.062069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.062174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.062186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.062192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.062197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.062211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.072150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.072212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.072224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.072230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.072235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.072252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.082117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.082194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.082206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.082212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.082217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.082230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.092158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.560 [2024-11-06 12:38:41.092218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.560 [2024-11-06 12:38:41.092230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.560 [2024-11-06 12:38:41.092236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.560 [2024-11-06 12:38:41.092241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.560 [2024-11-06 12:38:41.092254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.560 qpair failed and we were unable to recover it. 
00:32:09.560 [2024-11-06 12:38:41.102195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.102257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.102269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.102275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.102281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.102294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.112220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.112282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.112294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.112300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.112305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.112320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.122224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.122278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.122290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.122296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.122301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.122315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.132263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.132322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.132334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.132340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.132346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.132359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.142300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.142370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.142383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.142389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.142394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.142408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.152349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.152409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.152422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.152428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.152434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.152447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.162319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.162383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.162397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.162403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.162408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.162422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.561 [2024-11-06 12:38:41.172393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.561 [2024-11-06 12:38:41.172476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.561 [2024-11-06 12:38:41.172488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.561 [2024-11-06 12:38:41.172494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.561 [2024-11-06 12:38:41.172500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.561 [2024-11-06 12:38:41.172514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.561 qpair failed and we were unable to recover it. 
00:32:09.821 [2024-11-06 12:38:41.182389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.821 [2024-11-06 12:38:41.182450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.821 [2024-11-06 12:38:41.182466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.821 [2024-11-06 12:38:41.182472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.821 [2024-11-06 12:38:41.182477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.821 [2024-11-06 12:38:41.182491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.821 qpair failed and we were unable to recover it. 
00:32:09.821 [2024-11-06 12:38:41.192449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:09.821 [2024-11-06 12:38:41.192514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:09.821 [2024-11-06 12:38:41.192526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:09.821 [2024-11-06 12:38:41.192532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:09.821 [2024-11-06 12:38:41.192537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:09.821 [2024-11-06 12:38:41.192550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:09.821 qpair failed and we were unable to recover it. 
00:32:09.821 [2024-11-06 12:38:41.202439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.821 [2024-11-06 12:38:41.202502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.821 [2024-11-06 12:38:41.202514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.821 [2024-11-06 12:38:41.202520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.821 [2024-11-06 12:38:41.202528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.821 [2024-11-06 12:38:41.202541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.821 qpair failed and we were unable to recover it.
00:32:09.821 [2024-11-06 12:38:41.212540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.821 [2024-11-06 12:38:41.212599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.821 [2024-11-06 12:38:41.212611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.821 [2024-11-06 12:38:41.212618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.821 [2024-11-06 12:38:41.212623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.821 [2024-11-06 12:38:41.212637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.821 qpair failed and we were unable to recover it.
00:32:09.821 [2024-11-06 12:38:41.222569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.821 [2024-11-06 12:38:41.222632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.821 [2024-11-06 12:38:41.222644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.821 [2024-11-06 12:38:41.222651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.821 [2024-11-06 12:38:41.222656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.821 [2024-11-06 12:38:41.222670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.821 qpair failed and we were unable to recover it.
00:32:09.821 [2024-11-06 12:38:41.232585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.821 [2024-11-06 12:38:41.232641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.821 [2024-11-06 12:38:41.232653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.821 [2024-11-06 12:38:41.232661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.821 [2024-11-06 12:38:41.232667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.821 [2024-11-06 12:38:41.232682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.821 qpair failed and we were unable to recover it.
00:32:09.821 [2024-11-06 12:38:41.242563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.821 [2024-11-06 12:38:41.242619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.821 [2024-11-06 12:38:41.242632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.821 [2024-11-06 12:38:41.242638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.821 [2024-11-06 12:38:41.242644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.821 [2024-11-06 12:38:41.242657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.821 qpair failed and we were unable to recover it.
00:32:09.821 [2024-11-06 12:38:41.252629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.252703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.252716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.252722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.252727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.252741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.262613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.262669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.262681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.262687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.262692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.262705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.272712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.272768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.272780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.272786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.272792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.272805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.282725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.282783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.282796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.282802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.282807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.282821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.292768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.292836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.292851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.292857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.292863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.292876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.302808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.302874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.302886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.302893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.302898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.302912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.312812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.312921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.312934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.312940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.312946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.312960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.322774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.322829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.322841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.322847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.322853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.322866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.332883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.332948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.332960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.332966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.332974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.332987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.342926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.822 [2024-11-06 12:38:41.342986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.822 [2024-11-06 12:38:41.342999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.822 [2024-11-06 12:38:41.343005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.822 [2024-11-06 12:38:41.343011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.822 [2024-11-06 12:38:41.343024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.822 qpair failed and we were unable to recover it.
00:32:09.822 [2024-11-06 12:38:41.352935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.352989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.353001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.353007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.353012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.353025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.362913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.362966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.362979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.362985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.362990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.363004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.372981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.373040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.373052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.373058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.373064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.373077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.383003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.383072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.383085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.383090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.383096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.383110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.393083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.393140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.393152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.393158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.393163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.393176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.403062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.403118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.403130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.403136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.403141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.403155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.413114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.413175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.413188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.413193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.413199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.413211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.423058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.423124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.423137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.423143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.423149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.423162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:09.823 [2024-11-06 12:38:41.433142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:09.823 [2024-11-06 12:38:41.433200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:09.823 [2024-11-06 12:38:41.433212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:09.823 [2024-11-06 12:38:41.433218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:09.823 [2024-11-06 12:38:41.433223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:09.823 [2024-11-06 12:38:41.433236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:09.823 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.443133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.083 [2024-11-06 12:38:41.443187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.083 [2024-11-06 12:38:41.443200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.083 [2024-11-06 12:38:41.443206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.083 [2024-11-06 12:38:41.443211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.083 [2024-11-06 12:38:41.443224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.083 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.453178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.083 [2024-11-06 12:38:41.453241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.083 [2024-11-06 12:38:41.453253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.083 [2024-11-06 12:38:41.453259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.083 [2024-11-06 12:38:41.453264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.083 [2024-11-06 12:38:41.453278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.083 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.463210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.083 [2024-11-06 12:38:41.463287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.083 [2024-11-06 12:38:41.463300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.083 [2024-11-06 12:38:41.463311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.083 [2024-11-06 12:38:41.463317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.083 [2024-11-06 12:38:41.463330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.083 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.473239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.083 [2024-11-06 12:38:41.473312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.083 [2024-11-06 12:38:41.473324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.083 [2024-11-06 12:38:41.473330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.083 [2024-11-06 12:38:41.473336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.083 [2024-11-06 12:38:41.473349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.083 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.483188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.083 [2024-11-06 12:38:41.483268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.083 [2024-11-06 12:38:41.483280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.083 [2024-11-06 12:38:41.483286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.083 [2024-11-06 12:38:41.483292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.083 [2024-11-06 12:38:41.483305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.083 qpair failed and we were unable to recover it.
00:32:10.083 [2024-11-06 12:38:41.493330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.493396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.493408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.493415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.493420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.493433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.503411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.503472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.503484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.503490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.503496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.503513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.513348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.513410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.513422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.513428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.513433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.513446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.523363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.523419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.523431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.523437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.523443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.523457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.533364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.533425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.533437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.533443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.533448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.533466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.543401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:10.084 [2024-11-06 12:38:41.543466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:10.084 [2024-11-06 12:38:41.543479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:10.084 [2024-11-06 12:38:41.543487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:10.084 [2024-11-06 12:38:41.543494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:10.084 [2024-11-06 12:38:41.543509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:10.084 qpair failed and we were unable to recover it.
00:32:10.084 [2024-11-06 12:38:41.553488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.553549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.553562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.553567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.553573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.553587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.084 qpair failed and we were unable to recover it. 
00:32:10.084 [2024-11-06 12:38:41.563448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.563513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.563525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.563531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.563537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.563551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.084 qpair failed and we were unable to recover it. 
00:32:10.084 [2024-11-06 12:38:41.573504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.573576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.573588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.573594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.573600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.573613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.084 qpair failed and we were unable to recover it. 
00:32:10.084 [2024-11-06 12:38:41.583562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.583623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.583635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.583642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.583647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.583661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.084 qpair failed and we were unable to recover it. 
00:32:10.084 [2024-11-06 12:38:41.593670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.593755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.593766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.593776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.593781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.593794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.084 qpair failed and we were unable to recover it. 
00:32:10.084 [2024-11-06 12:38:41.603527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.084 [2024-11-06 12:38:41.603585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.084 [2024-11-06 12:38:41.603597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.084 [2024-11-06 12:38:41.603603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.084 [2024-11-06 12:38:41.603609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.084 [2024-11-06 12:38:41.603622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.613673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.613731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.613743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.613748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.613754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.613767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.623657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.623718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.623730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.623736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.623741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.623754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.633673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.633736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.633748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.633754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.633759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.633776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.643776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.643832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.643844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.643850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.643855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.643869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.653738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.653800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.653812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.653819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.653824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.653838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.663769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.663829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.663841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.663847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.663852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.663865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.673852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.673915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.673928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.673934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.673939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.673953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.683850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.683911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.683923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.683929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.683935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.683948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.085 [2024-11-06 12:38:41.693950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.085 [2024-11-06 12:38:41.694010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.085 [2024-11-06 12:38:41.694023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.085 [2024-11-06 12:38:41.694029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.085 [2024-11-06 12:38:41.694034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.085 [2024-11-06 12:38:41.694048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.085 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.703865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.703927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.703938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.703944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.703950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.703962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.713966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.714027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.714040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.714046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.714051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.714065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.723978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.724034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.724049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.724055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.724060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.724074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.733952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.734020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.734032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.734038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.734043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.734057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.744012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.744076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.744089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.744095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.744100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.744114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.754081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.754163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.754176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.754182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.754187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.754201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.764046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.764129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.764141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.764147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.764156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.764169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.774140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.774204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.774216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.774222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.774227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.774240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.784174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.784258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.784270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.784276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.784281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.784294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.794171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.794233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.794245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.794251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.794257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.794271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.804166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.804220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.804232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.804239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.804244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.804257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.345 qpair failed and we were unable to recover it. 
00:32:10.345 [2024-11-06 12:38:41.814232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.345 [2024-11-06 12:38:41.814293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.345 [2024-11-06 12:38:41.814305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.345 [2024-11-06 12:38:41.814312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.345 [2024-11-06 12:38:41.814318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.345 [2024-11-06 12:38:41.814331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.824334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.824395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.824408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.824414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.824420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.824434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.834326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.834386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.834399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.834405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.834411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.834425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.844338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.844392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.844404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.844410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.844415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.844429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.854308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.854372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.854387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.854393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.854398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.854412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.864402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.864467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.864480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.864486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.864491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.864505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.874351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.874417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.874430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.874436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.874442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.874455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.884407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.884498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.884510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.884516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.884522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.884536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.894497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.894566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.894578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.894584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.894593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.894607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.904499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.904559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.904571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.904577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.904582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.904595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.914451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.914519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.914532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.914538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.914543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.914558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.924571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.924667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.924679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.924684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.924690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.924703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.934595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.346 [2024-11-06 12:38:41.934653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.346 [2024-11-06 12:38:41.934665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.346 [2024-11-06 12:38:41.934671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.346 [2024-11-06 12:38:41.934676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.346 [2024-11-06 12:38:41.934689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.346 qpair failed and we were unable to recover it. 
00:32:10.346 [2024-11-06 12:38:41.944633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.347 [2024-11-06 12:38:41.944696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.347 [2024-11-06 12:38:41.944708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.347 [2024-11-06 12:38:41.944714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.347 [2024-11-06 12:38:41.944719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.347 [2024-11-06 12:38:41.944733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.347 qpair failed and we were unable to recover it. 
00:32:10.347 [2024-11-06 12:38:41.954647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.347 [2024-11-06 12:38:41.954707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.347 [2024-11-06 12:38:41.954720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.347 [2024-11-06 12:38:41.954726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.347 [2024-11-06 12:38:41.954731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.347 [2024-11-06 12:38:41.954744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.347 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:41.964611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:41.964668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:41.964681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:41.964687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:41.964692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:41.964705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:41.974636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:41.974698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:41.974709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:41.974716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:41.974721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:41.974734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:41.984802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:41.984875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:41.984887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:41.984893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:41.984898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:41.984911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:41.994790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:41.994854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:41.994866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:41.994872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:41.994877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:41.994891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:42.004748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:42.004803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:42.004815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:42.004822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:42.004827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:42.004841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:42.014858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:42.014920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:42.014932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:42.014939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:42.014944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:42.014957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.606 [2024-11-06 12:38:42.024878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.606 [2024-11-06 12:38:42.024976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.606 [2024-11-06 12:38:42.024988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.606 [2024-11-06 12:38:42.024997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.606 [2024-11-06 12:38:42.025003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.606 [2024-11-06 12:38:42.025017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.606 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.034812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.034873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.034885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.034891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.034896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.034909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.044875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.044927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.044939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.044945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.044950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.044964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.054974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.055037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.055049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.055055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.055060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.055073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.064995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.065052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.065064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.065070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.065075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.065092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.075013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.075084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.075097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.075103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.075108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.075122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.084982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.085037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.085049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.085055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.085060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.085074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.095077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.095141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.095153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.095160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.095165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.095179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.105118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.105180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.105192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.105198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.105204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.105217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.115045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.115122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.115134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.115140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.115146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.115159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.125100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.125174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.125186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.125192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.125197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.125211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.135160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.135221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.135233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.135239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.135244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.135257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.145220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.145315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.145327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.145333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.145338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.145351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.607 [2024-11-06 12:38:42.155235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.607 [2024-11-06 12:38:42.155301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.607 [2024-11-06 12:38:42.155316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.607 [2024-11-06 12:38:42.155322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.607 [2024-11-06 12:38:42.155328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.607 [2024-11-06 12:38:42.155342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.607 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.165214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.165271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.165283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.165289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.165295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.165308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.175304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.175370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.175382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.175388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.175394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.175408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.185319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.185377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.185389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.185396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.185401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.185415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.195349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.195410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.195422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.195429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.195434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.195451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.205318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.205373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.205385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.205391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.205397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.205410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.608 [2024-11-06 12:38:42.215410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.608 [2024-11-06 12:38:42.215469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.608 [2024-11-06 12:38:42.215482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.608 [2024-11-06 12:38:42.215488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.608 [2024-11-06 12:38:42.215493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.608 [2024-11-06 12:38:42.215507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.608 qpair failed and we were unable to recover it. 
00:32:10.895 [2024-11-06 12:38:42.225442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.225505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.225518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.225524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.225529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.225542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.235496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.235585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.235597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.235603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.235608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.235622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.245360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.245414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.245427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.245433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.245439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.245453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.255554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.255611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.255624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.255629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.255635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.255648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.265553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.265615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.265628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.265634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.265639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.265652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.275632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.275711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.275723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.275729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.275735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.275748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.285551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.285603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.285619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.285625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.285631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.285645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.295670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.295729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.295741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.295747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.295753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.295766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.305678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.305756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.305768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.305773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.305779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.305793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.315654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.315728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.315740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.315746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.315751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.315764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.325705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.325803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.325815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.325821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.325829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.325843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.335743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.335827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.335840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.335846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.335851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.335865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.896 [2024-11-06 12:38:42.345801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.896 [2024-11-06 12:38:42.345855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.896 [2024-11-06 12:38:42.345867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.896 [2024-11-06 12:38:42.345872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.896 [2024-11-06 12:38:42.345878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.896 [2024-11-06 12:38:42.345891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.896 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.355791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.355890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.355902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.355908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.355914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.355927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.365771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.365823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.365835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.365841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.365846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.365860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.375778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.375837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.375848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.375854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.375860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.375873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.385885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.385939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.385952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.385957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.385963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.385977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.395919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.395975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.395986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.395992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.395998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.396011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.405892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.405945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.405957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.405963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.405968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.405981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.415964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.416023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.416038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.416044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.416050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.416062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.426014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.426074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.426086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.426091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.426097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.426110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.436025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.436128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.436140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.436146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.436152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.436166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.445993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.446062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.446074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.446080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.446085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.446099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.456196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.456292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.456305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.456314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.456319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.456333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.466180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.466234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.466247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.466252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.466259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.466273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.897 qpair failed and we were unable to recover it. 
00:32:10.897 [2024-11-06 12:38:42.476237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.897 [2024-11-06 12:38:42.476322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.897 [2024-11-06 12:38:42.476334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.897 [2024-11-06 12:38:42.476340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.897 [2024-11-06 12:38:42.476345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.897 [2024-11-06 12:38:42.476359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.898 qpair failed and we were unable to recover it. 
00:32:10.898 [2024-11-06 12:38:42.486141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.898 [2024-11-06 12:38:42.486197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.898 [2024-11-06 12:38:42.486208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.898 [2024-11-06 12:38:42.486214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.898 [2024-11-06 12:38:42.486220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.898 [2024-11-06 12:38:42.486234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.898 qpair failed and we were unable to recover it. 
00:32:10.898 [2024-11-06 12:38:42.496209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.898 [2024-11-06 12:38:42.496263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.898 [2024-11-06 12:38:42.496276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.898 [2024-11-06 12:38:42.496282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.898 [2024-11-06 12:38:42.496287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.898 [2024-11-06 12:38:42.496301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.898 qpair failed and we were unable to recover it. 
00:32:10.898 [2024-11-06 12:38:42.506239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:10.898 [2024-11-06 12:38:42.506297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:10.898 [2024-11-06 12:38:42.506310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:10.898 [2024-11-06 12:38:42.506316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:10.898 [2024-11-06 12:38:42.506321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:10.898 [2024-11-06 12:38:42.506335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:10.898 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.516234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.516306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.516318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.516324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.516330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.516343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.526237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.526292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.526304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.526310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.526315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.526329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.536313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.536400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.536412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.536418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.536423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.536437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.546351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.546417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.546429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.546435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.546440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.546454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.556345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.556406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.556419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.556426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.556432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.556445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.566340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.566395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.566408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.566414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.566419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.566432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.576355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.576416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.576429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.576436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.576441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.576455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.586492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.586588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.586600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.586610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.586615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.586629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.596493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.596558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.596570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.596577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.596582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.596596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.606379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.606475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.606487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.606493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.606499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.606513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.616514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.616611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.616624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.157 [2024-11-06 12:38:42.616630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.157 [2024-11-06 12:38:42.616635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.157 [2024-11-06 12:38:42.616649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.157 qpair failed and we were unable to recover it. 
00:32:11.157 [2024-11-06 12:38:42.626568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.157 [2024-11-06 12:38:42.626627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.157 [2024-11-06 12:38:42.626639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.626645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.626650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.626667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.636654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.636742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.636755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.636761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.636766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.636779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.646566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.646624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.646636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.646642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.646647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.646660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.656690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.656752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.656765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.656771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.656776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.656789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.666599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.666662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.666675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.666681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.666686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.666699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.676733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.676794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.676807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.676813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.676819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.676832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.686677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.686730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.686742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.686748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.686753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.686767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.696766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.696868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.696880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.696886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.696892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.696906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.706803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.706866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.706878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.706884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.706889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.706903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.716853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.716946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.716961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.716968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.716973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.716987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.726789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.726845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.726857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.726863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.726868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.726882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.736883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.736941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.736954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.736960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.736965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.736979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.746882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.746938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.746951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.746957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.158 [2024-11-06 12:38:42.746963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.158 [2024-11-06 12:38:42.746976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.158 qpair failed and we were unable to recover it. 
00:32:11.158 [2024-11-06 12:38:42.756995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.158 [2024-11-06 12:38:42.757050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.158 [2024-11-06 12:38:42.757062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.158 [2024-11-06 12:38:42.757068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.159 [2024-11-06 12:38:42.757073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.159 [2024-11-06 12:38:42.757090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.159 qpair failed and we were unable to recover it. 
00:32:11.159 [2024-11-06 12:38:42.766914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.159 [2024-11-06 12:38:42.766967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.159 [2024-11-06 12:38:42.766979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.159 [2024-11-06 12:38:42.766985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.159 [2024-11-06 12:38:42.766991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.159 [2024-11-06 12:38:42.767004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.159 qpair failed and we were unable to recover it. 
00:32:11.418 [2024-11-06 12:38:42.776995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.777052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.777065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.777071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.777076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.777090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.786995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.787051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.787063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.787069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.787075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.787088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.797045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.797147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.797158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.797164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.797170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.797183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.807023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.807075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.807087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.807093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.807099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.807112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.817113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.817183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.817195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.817201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.817206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.817220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.827149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.827207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.827219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.827225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.827230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.827244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.837179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.837237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.837249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.837256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.837261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.837274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.847149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.847205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.847220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.847226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.847231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.847245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.857233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.857310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.857322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.857328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.857333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.857347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.867271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.867327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.867340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.867345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.867351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.867364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.877313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.877372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.877384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.877389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.877395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.877408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.887271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.887325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.887337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.887343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.887351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.887364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.419 [2024-11-06 12:38:42.897334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.419 [2024-11-06 12:38:42.897396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.419 [2024-11-06 12:38:42.897409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.419 [2024-11-06 12:38:42.897415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.419 [2024-11-06 12:38:42.897420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.419 [2024-11-06 12:38:42.897434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.419 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.907374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.907454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.907471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.907477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.907482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.907496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.917426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.917488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.917501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.917508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.917513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.917528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.927359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.927415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.927427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.927433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.927438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.927452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.937400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.937476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.937489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.937495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.937500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.937515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.947503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.947602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.947614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.947620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.947625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.947639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.957519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.957579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.957593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.957598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.957604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.957618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.967502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.967557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.967570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.967577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.967582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.967596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.977583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.977643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.977660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.977667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.977672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.977685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.987664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.987721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.987734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.987740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.987745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.987758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:42.997604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:42.997665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:42.997678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:42.997685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:42.997690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:42.997704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:43.007608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:43.007661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:43.007673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:43.007679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:43.007684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:43.007698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:43.017611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:43.017671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:43.017683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:43.017692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:43.017698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:43.017711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.420 [2024-11-06 12:38:43.027693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.420 [2024-11-06 12:38:43.027782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.420 [2024-11-06 12:38:43.027794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.420 [2024-11-06 12:38:43.027800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.420 [2024-11-06 12:38:43.027805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.420 [2024-11-06 12:38:43.027819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.420 qpair failed and we were unable to recover it.
00:32:11.680 [2024-11-06 12:38:43.037746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.680 [2024-11-06 12:38:43.037808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.681 [2024-11-06 12:38:43.037820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.681 [2024-11-06 12:38:43.037826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.681 [2024-11-06 12:38:43.037831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.681 [2024-11-06 12:38:43.037844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.681 qpair failed and we were unable to recover it.
00:32:11.681 [2024-11-06 12:38:43.047724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.681 [2024-11-06 12:38:43.047795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.681 [2024-11-06 12:38:43.047808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.681 [2024-11-06 12:38:43.047814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.681 [2024-11-06 12:38:43.047820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.681 [2024-11-06 12:38:43.047833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.681 qpair failed and we were unable to recover it.
00:32:11.681 [2024-11-06 12:38:43.057805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.681 [2024-11-06 12:38:43.057868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.681 [2024-11-06 12:38:43.057881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.681 [2024-11-06 12:38:43.057887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.681 [2024-11-06 12:38:43.057892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.681 [2024-11-06 12:38:43.057906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.681 qpair failed and we were unable to recover it.
00:32:11.681 [2024-11-06 12:38:43.067834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.681 [2024-11-06 12:38:43.067902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.681 [2024-11-06 12:38:43.067915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.681 [2024-11-06 12:38:43.067921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.681 [2024-11-06 12:38:43.067926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.681 [2024-11-06 12:38:43.067940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.681 qpair failed and we were unable to recover it.
00:32:11.681 [2024-11-06 12:38:43.077842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.681 [2024-11-06 12:38:43.077901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.681 [2024-11-06 12:38:43.077913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.681 [2024-11-06 12:38:43.077920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.681 [2024-11-06 12:38:43.077925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.681 [2024-11-06 12:38:43.077939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.681 qpair failed and we were unable to recover it.
00:32:11.681 [2024-11-06 12:38:43.087767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.087820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.087833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.087839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.087845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.087859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.097941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.097999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.098012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.098018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.098023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.098037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.107955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.108021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.108034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.108040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.108045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.108059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.117996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.118055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.118067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.118074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.118079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.118092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.127962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.128018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.128030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.128036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.128041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.128055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.138034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.138097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.138109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.138116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.138121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.138134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.148060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.148124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.148137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.148146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.148152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.148166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.158092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.681 [2024-11-06 12:38:43.158151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.681 [2024-11-06 12:38:43.158163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.681 [2024-11-06 12:38:43.158169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.681 [2024-11-06 12:38:43.158175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.681 [2024-11-06 12:38:43.158188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.681 qpair failed and we were unable to recover it. 
00:32:11.681 [2024-11-06 12:38:43.168067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.168120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.168132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.168138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.168143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.168157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.178156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.178214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.178226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.178232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.178237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.178251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.188209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.188269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.188282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.188288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.188294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.188311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.198224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.198281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.198294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.198300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.198305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.198319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.208223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.208278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.208290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.208296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.208302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.208315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.218317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.218375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.218386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.218392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.218398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.218411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.228325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.228382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.228395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.228401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.228406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.228420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.238288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.238346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.238358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.238364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.238370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.238383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.248295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.248350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.248362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.248368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.248373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.248386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.258382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.258439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.258452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.258461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.258466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.258480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.268446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.268530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.268542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.268548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.268553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.268567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.278434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.278501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.278516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.278523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.278528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.278542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.682 [2024-11-06 12:38:43.288427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.682 [2024-11-06 12:38:43.288507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.682 [2024-11-06 12:38:43.288520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.682 [2024-11-06 12:38:43.288526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.682 [2024-11-06 12:38:43.288532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.682 [2024-11-06 12:38:43.288545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.682 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.298510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.298571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.298583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.298589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.298594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.298608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.308531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.308587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.308599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.308605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.308610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.308624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.318544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.318640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.318653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.318659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.318667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.318681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.328518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.328573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.328585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.328591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.328596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.328610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.338606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.338664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.338677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.338683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.338688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.338701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.348656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.348712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.348725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.348731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.348737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.348751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.358604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.358668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.942 [2024-11-06 12:38:43.358680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.942 [2024-11-06 12:38:43.358686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.942 [2024-11-06 12:38:43.358692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.942 [2024-11-06 12:38:43.358705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.942 qpair failed and we were unable to recover it. 
00:32:11.942 [2024-11-06 12:38:43.368681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:11.942 [2024-11-06 12:38:43.368739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:11.943 [2024-11-06 12:38:43.368751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:11.943 [2024-11-06 12:38:43.368757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:11.943 [2024-11-06 12:38:43.368762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:11.943 [2024-11-06 12:38:43.368776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.943 qpair failed and we were unable to recover it. 
00:32:11.943 [2024-11-06 12:38:43.378731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:11.943 [2024-11-06 12:38:43.378794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:11.943 [2024-11-06 12:38:43.378806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:11.943 [2024-11-06 12:38:43.378812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:11.943 [2024-11-06 12:38:43.378817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:11.943 [2024-11-06 12:38:43.378830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:11.943 qpair failed and we were unable to recover it.
[The identical six-entry CONNECT failure sequence repeats at ~10 ms intervals, from [2024-11-06 12:38:43.388816] through [2024-11-06 12:38:43.719798] (elapsed 00:32:11.943 to 00:32:12.205), 34 more times: Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, failed to poll Fabric CONNECT, failed to connect tqpair=0x7f205c000b90, CQ transport error -6 (No such device or address) on qpair id 4. Each attempt ends with "qpair failed and we were unable to recover it."]
00:32:12.205 [2024-11-06 12:38:43.729674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.729737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.729749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.729756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.729761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.729774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.739762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.739822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.739835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.739841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.739846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.739860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.749795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.749855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.749867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.749873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.749878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.749895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.759811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.759914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.759925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.759932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.759937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.759950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.769786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.769893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.769905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.769911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.769917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.769930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.779864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.779925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.779938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.779944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.779949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.779963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.789933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.789991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.790003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.790009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.790015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.790028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.799912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.799974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.799987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.799993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.799998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.205 [2024-11-06 12:38:43.800012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.205 qpair failed and we were unable to recover it. 
00:32:12.205 [2024-11-06 12:38:43.809893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.205 [2024-11-06 12:38:43.809953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.205 [2024-11-06 12:38:43.809966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.205 [2024-11-06 12:38:43.809972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.205 [2024-11-06 12:38:43.809977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.206 [2024-11-06 12:38:43.809991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.206 qpair failed and we were unable to recover it. 
00:32:12.464 [2024-11-06 12:38:43.819983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.464 [2024-11-06 12:38:43.820047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.464 [2024-11-06 12:38:43.820060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.464 [2024-11-06 12:38:43.820065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.464 [2024-11-06 12:38:43.820071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.820084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.830013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.830077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.830094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.830101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.830106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.830123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.840039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.840121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.840136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.840142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.840148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.840163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.850016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.850073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.850085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.850091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.850097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.850110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.860098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.860166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.860179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.860185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.860190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.860204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.870129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.870229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.870242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.870248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.870253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.870267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.880154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.880221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.880233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.880239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.880247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.880261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.890157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.890259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.890272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.890278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.890283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.890297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.900255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.900313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.900325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.900331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.900336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.900350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.910239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.910302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.910314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.910320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.910325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.910339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.920260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.920322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.920334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.920340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.920345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.920359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.930273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.930328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.930340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.930346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.930352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.930366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.940330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.940391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.940404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.940410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.940415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.465 [2024-11-06 12:38:43.940429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.465 qpair failed and we were unable to recover it. 
00:32:12.465 [2024-11-06 12:38:43.950354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.465 [2024-11-06 12:38:43.950416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.465 [2024-11-06 12:38:43.950429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.465 [2024-11-06 12:38:43.950435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.465 [2024-11-06 12:38:43.950440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.466 [2024-11-06 12:38:43.950453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.466 qpair failed and we were unable to recover it. 
00:32:12.466 [2024-11-06 12:38:43.960438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.466 [2024-11-06 12:38:43.960506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.466 [2024-11-06 12:38:43.960521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.466 [2024-11-06 12:38:43.960527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.466 [2024-11-06 12:38:43.960532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.466 [2024-11-06 12:38:43.960546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.466 qpair failed and we were unable to recover it. 
00:32:12.466 [2024-11-06 12:38:43.970350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.466 [2024-11-06 12:38:43.970405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.466 [2024-11-06 12:38:43.970421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.466 [2024-11-06 12:38:43.970428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.466 [2024-11-06 12:38:43.970434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.466 [2024-11-06 12:38:43.970447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.466 qpair failed and we were unable to recover it. 
00:32:12.466 [2024-11-06 12:38:43.980425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.466 [2024-11-06 12:38:43.980487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.466 [2024-11-06 12:38:43.980500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.466 [2024-11-06 12:38:43.980506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.466 [2024-11-06 12:38:43.980512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.466 [2024-11-06 12:38:43.980526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.466 qpair failed and we were unable to recover it. 
00:32:12.466 [2024-11-06 12:38:43.990473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.466 [2024-11-06 12:38:43.990570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.466 [2024-11-06 12:38:43.990582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.466 [2024-11-06 12:38:43.990588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.466 [2024-11-06 12:38:43.990593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.466 [2024-11-06 12:38:43.990608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.466 qpair failed and we were unable to recover it. 
00:32:12.466 [2024-11-06 12:38:44.000477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.000552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.000565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.000571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.000577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.000591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.010472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.010528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.010541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.010547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.010556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.010569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.020578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.020643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.020655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.020662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.020668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.020681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.030583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.030648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.030660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.030666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.030672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.030685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.040611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.040706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.040718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.040725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.040730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.040744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.050582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.050640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.050652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.050658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.050664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.050678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.060658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.060725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.060737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.060743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.060749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.060762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.466 [2024-11-06 12:38:44.070702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.466 [2024-11-06 12:38:44.070758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.466 [2024-11-06 12:38:44.070771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.466 [2024-11-06 12:38:44.070777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.466 [2024-11-06 12:38:44.070783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.466 [2024-11-06 12:38:44.070796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.466 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.080748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.080804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.080816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.080822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.080827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.080841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.090701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.090756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.090768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.090774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.090779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.090793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.100794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.100866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.100881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.100887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.100892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.100906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.110742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.110800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.110812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.110818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.110824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.110838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.120835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.120941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.120953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.120960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.120965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.120979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.130807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.130861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.130874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.130879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.130884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.130897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.140905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.140976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.140989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.140997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.141002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.141016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.150945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.151004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.151017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.151023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.151029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.151042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.160948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.161010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.161022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.161029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.161034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.161047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.170923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.170977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.170989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.170995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.171000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.171014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.181016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.181080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.181092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.181098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.181103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.181116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.191050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.726 [2024-11-06 12:38:44.191113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.726 [2024-11-06 12:38:44.191125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.726 [2024-11-06 12:38:44.191131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.726 [2024-11-06 12:38:44.191136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.726 [2024-11-06 12:38:44.191149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.726 qpair failed and we were unable to recover it.
00:32:12.726 [2024-11-06 12:38:44.201063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.201132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.201144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.201151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.201156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.201169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.211039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.211114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.211127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.211133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.211138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.211152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.221120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.221176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.221189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.221195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.221200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.221214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.231194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.231255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.231268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.231274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.231279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.231293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.241207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.241281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.241293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.241299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.241304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.241317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.251215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.251310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.251323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.251329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.251334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.251348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.261282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.261347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.261360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.261366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.261371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.261384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.271298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.271380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.271393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.271402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.271407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.271420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.281230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.281291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.281304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.281310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.281315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.281328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.291327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.291387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.291400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.291406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.291412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.291426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.301362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.301420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.301432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.301438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.301444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.301457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.311419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.311511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.311523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.311529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.311535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.311553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.321417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.727 [2024-11-06 12:38:44.321482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.727 [2024-11-06 12:38:44.321495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.727 [2024-11-06 12:38:44.321501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.727 [2024-11-06 12:38:44.321506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.727 [2024-11-06 12:38:44.321520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.727 qpair failed and we were unable to recover it.
00:32:12.727 [2024-11-06 12:38:44.331428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.728 [2024-11-06 12:38:44.331531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.728 [2024-11-06 12:38:44.331543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.728 [2024-11-06 12:38:44.331549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.728 [2024-11-06 12:38:44.331555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.728 [2024-11-06 12:38:44.331568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.728 qpair failed and we were unable to recover it.
00:32:12.987 [2024-11-06 12:38:44.341484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:12.987 [2024-11-06 12:38:44.341590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:12.987 [2024-11-06 12:38:44.341602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:12.987 [2024-11-06 12:38:44.341608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:12.987 [2024-11-06 12:38:44.341613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90
00:32:12.987 [2024-11-06 12:38:44.341627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:12.987 qpair failed and we were unable to recover it.
00:32:12.987 [2024-11-06 12:38:44.351512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.351572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.351585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.351592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.351597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.351611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.361529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.361587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.361599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.361605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.361610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.361623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.371542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.371640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.371652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.371658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.371663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.371677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.381581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.381640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.381652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.381657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.381663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.381676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.391621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.391681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.391693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.391700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.391705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.391719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.401653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.987 [2024-11-06 12:38:44.401718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.987 [2024-11-06 12:38:44.401734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.987 [2024-11-06 12:38:44.401740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.987 [2024-11-06 12:38:44.401746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.987 [2024-11-06 12:38:44.401759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.987 qpair failed and we were unable to recover it. 
00:32:12.987 [2024-11-06 12:38:44.411637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.411696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.411709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.411715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.411720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.411733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.421673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.421752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.421764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.421770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.421776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.421789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.431737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.431798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.431811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.431817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.431823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.431836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.441738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.441798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.441810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.441816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.441825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.441838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.451740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.451813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.451825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.451831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.451836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.451850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.461817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.461877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.461890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.461895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.461901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.461914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.471890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.471982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.471995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.472001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.472006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.472021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.481825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.481886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.481899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.481905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.481910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.481923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.491759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.491815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.491828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.491834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.491839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.491853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.501852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.501942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.501954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.501960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.501965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.501978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.511941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.511997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.512009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.512015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.512020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.512033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.521982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.522042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.522055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.522061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.522066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.522079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.988 qpair failed and we were unable to recover it. 
00:32:12.988 [2024-11-06 12:38:44.531905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.988 [2024-11-06 12:38:44.531964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.988 [2024-11-06 12:38:44.531979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.988 [2024-11-06 12:38:44.531985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.988 [2024-11-06 12:38:44.531991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.988 [2024-11-06 12:38:44.532004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.542032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.542095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.542108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.542114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.542119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.542132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.552078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.552137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.552149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.552155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.552161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.552174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.562002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.562059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.562071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.562077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.562082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.562096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.572076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.572133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.572145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.572150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.572159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.572172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.582157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.582223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.582235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.582242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.582247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.582260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.592190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.592276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.592288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.592294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.592299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.592312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:12.989 [2024-11-06 12:38:44.602133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:12.989 [2024-11-06 12:38:44.602191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:12.989 [2024-11-06 12:38:44.602202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:12.989 [2024-11-06 12:38:44.602208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.989 [2024-11-06 12:38:44.602214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:12.989 [2024-11-06 12:38:44.602227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:12.989 qpair failed and we were unable to recover it. 
00:32:13.248 [2024-11-06 12:38:44.612116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.248 [2024-11-06 12:38:44.612170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.248 [2024-11-06 12:38:44.612182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.248 [2024-11-06 12:38:44.612188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.248 [2024-11-06 12:38:44.612193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.248 [2024-11-06 12:38:44.612206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.248 qpair failed and we were unable to recover it. 
00:32:13.248 [2024-11-06 12:38:44.622193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.248 [2024-11-06 12:38:44.622264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.248 [2024-11-06 12:38:44.622277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.248 [2024-11-06 12:38:44.622283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.248 [2024-11-06 12:38:44.622288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.248 [2024-11-06 12:38:44.622301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.248 qpair failed and we were unable to recover it. 
00:32:13.250 [preceding error sequence (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / CQ transport error -6 on qpair id 4 / qpair failed and we were unable to recover it) repeated 34 more times between 12:38:44.632 and 12:38:44.963, one retry roughly every 10 ms; only timestamps differ]
00:32:13.510 [2024-11-06 12:38:44.973112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:44.973166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:44.973178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:44.973185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:44.973190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:44.973203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:44.983276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:44.983350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:44.983362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:44.983368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:44.983374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:44.983387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:44.993301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:44.993355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:44.993368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:44.993373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:44.993379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:44.993393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.003351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:45.003442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:45.003454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:45.003464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:45.003472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:45.003485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.013300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:45.013353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:45.013365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:45.013371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:45.013376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:45.013390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.023393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:45.023453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:45.023469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:45.023475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:45.023480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:45.023494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.033476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:45.033560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:45.033572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:45.033578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:45.033583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:45.033597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.043352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.510 [2024-11-06 12:38:45.043419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.510 [2024-11-06 12:38:45.043432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.510 [2024-11-06 12:38:45.043438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.510 [2024-11-06 12:38:45.043444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.510 [2024-11-06 12:38:45.043462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.510 qpair failed and we were unable to recover it. 
00:32:13.510 [2024-11-06 12:38:45.053413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.053471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.053484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.053491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.053496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.053510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.063598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.063663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.063675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.063682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.063688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.063702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.073598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.073655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.073668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.073674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.073680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.073693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.083652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.083738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.083751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.083757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.083763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.083776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.093564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.093626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.093644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.093650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.093655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.093669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.103624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.103687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.103699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.103706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.103711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.103724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.113649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.113704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.113716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.113722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.113728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.113740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.511 [2024-11-06 12:38:45.123677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.511 [2024-11-06 12:38:45.123735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.511 [2024-11-06 12:38:45.123747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.511 [2024-11-06 12:38:45.123753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.511 [2024-11-06 12:38:45.123759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.511 [2024-11-06 12:38:45.123773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.511 qpair failed and we were unable to recover it. 
00:32:13.770 [2024-11-06 12:38:45.133671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.770 [2024-11-06 12:38:45.133776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.770 [2024-11-06 12:38:45.133788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.770 [2024-11-06 12:38:45.133794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.770 [2024-11-06 12:38:45.133803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.770 [2024-11-06 12:38:45.133817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.770 qpair failed and we were unable to recover it. 
00:32:13.770 [2024-11-06 12:38:45.143726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.770 [2024-11-06 12:38:45.143791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.770 [2024-11-06 12:38:45.143803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.143809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.143814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.143827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.153699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.153757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.153769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.153775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.153781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.153795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.163788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.163857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.163870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.163875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.163881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.163895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.173759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.173825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.173837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.173844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.173849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.173863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.183751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.183827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.183840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.183846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.183851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.183864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.193781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.193840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.193852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.193858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.193864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.193877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.203950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.204011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.204023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.204030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.204035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.204049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.213871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.213927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.213939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.213945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.213951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.213964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.223949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.224018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.224033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.224039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.224044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.224057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.234010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.234103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.234115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.234120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.234126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.234139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:13.771 [2024-11-06 12:38:45.244009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:13.771 [2024-11-06 12:38:45.244076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:13.771 [2024-11-06 12:38:45.244088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:13.771 [2024-11-06 12:38:45.244094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:13.771 [2024-11-06 12:38:45.244099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:13.771 [2024-11-06 12:38:45.244113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.771 qpair failed and we were unable to recover it. 
00:32:14.043 (identical connect-failure sequence repeated 34 more times on qpair id 4, 12:38:45.254 through 12:38:45.585)
00:32:14.043 [2024-11-06 12:38:45.595042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.595104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.595117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.595123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.595128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.595141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.043 [2024-11-06 12:38:45.605061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.605124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.605137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.605143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.605148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.605161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.043 [2024-11-06 12:38:45.615077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.615132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.615145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.615150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.615156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.615170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.043 [2024-11-06 12:38:45.625119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.625181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.625194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.625201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.625206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.625220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.043 [2024-11-06 12:38:45.635163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.635219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.635231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.635238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.635243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.635257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.043 [2024-11-06 12:38:45.645162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.043 [2024-11-06 12:38:45.645222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.043 [2024-11-06 12:38:45.645235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.043 [2024-11-06 12:38:45.645241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.043 [2024-11-06 12:38:45.645247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.043 [2024-11-06 12:38:45.645261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.043 qpair failed and we were unable to recover it. 
00:32:14.302 [2024-11-06 12:38:45.655077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.302 [2024-11-06 12:38:45.655132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.302 [2024-11-06 12:38:45.655147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.302 [2024-11-06 12:38:45.655153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.302 [2024-11-06 12:38:45.655159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.302 [2024-11-06 12:38:45.655172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.302 qpair failed and we were unable to recover it. 
00:32:14.302 [2024-11-06 12:38:45.665271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.302 [2024-11-06 12:38:45.665329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.302 [2024-11-06 12:38:45.665342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.302 [2024-11-06 12:38:45.665347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.302 [2024-11-06 12:38:45.665353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f205c000b90 00:32:14.302 [2024-11-06 12:38:45.665366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:14.302 qpair failed and we were unable to recover it. 
00:32:14.302 [2024-11-06 12:38:45.675254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.302 [2024-11-06 12:38:45.675318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.302 [2024-11-06 12:38:45.675339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.302 [2024-11-06 12:38:45.675346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.302 [2024-11-06 12:38:45.675352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2068000b90 00:32:14.302 [2024-11-06 12:38:45.675369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:14.302 qpair failed and we were unable to recover it. 
00:32:14.302 [2024-11-06 12:38:45.685304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.302 [2024-11-06 12:38:45.685365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.302 [2024-11-06 12:38:45.685379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.302 [2024-11-06 12:38:45.685386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.302 [2024-11-06 12:38:45.685392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2068000b90 00:32:14.302 [2024-11-06 12:38:45.685406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:14.302 qpair failed and we were unable to recover it. 
00:32:14.302 [2024-11-06 12:38:45.695282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.302 [2024-11-06 12:38:45.695358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.302 [2024-11-06 12:38:45.695377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.302 [2024-11-06 12:38:45.695385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.303 [2024-11-06 12:38:45.695394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2060000b90 00:32:14.303 [2024-11-06 12:38:45.695411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.303 qpair failed and we were unable to recover it. 
00:32:14.303 [2024-11-06 12:38:45.705362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.303 [2024-11-06 12:38:45.705425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.303 [2024-11-06 12:38:45.705440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.303 [2024-11-06 12:38:45.705447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.303 [2024-11-06 12:38:45.705453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2060000b90 00:32:14.303 [2024-11-06 12:38:45.705471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.303 qpair failed and we were unable to recover it. 
00:32:14.303 [2024-11-06 12:38:45.715409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:14.303 [2024-11-06 12:38:45.715523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:14.303 [2024-11-06 12:38:45.715572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:14.303 [2024-11-06 12:38:45.715594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:14.303 [2024-11-06 12:38:45.715613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d87550 00:32:14.303 [2024-11-06 12:38:45.715655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:14.303 qpair failed and we were unable to recover it. 00:32:14.303 [2024-11-06 12:38:45.715693] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:32:14.303 A controller has encountered a failure and is being reset. 00:32:14.303 Controller properly reset. 00:32:14.303 Initializing NVMe Controllers 00:32:14.303 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:14.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:14.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:14.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:14.303 Initialization complete. Launching workers. 
00:32:14.303 Starting thread on core 1 00:32:14.303 Starting thread on core 2 00:32:14.303 Starting thread on core 3 00:32:14.303 Starting thread on core 0 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:14.303 00:32:14.303 real 0m10.964s 00:32:14.303 user 0m19.094s 00:32:14.303 sys 0m4.475s 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 ************************************ 00:32:14.303 END TEST nvmf_target_disconnect_tc2 00:32:14.303 ************************************ 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.303 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.303 rmmod nvme_tcp 00:32:14.303 rmmod nvme_fabrics 00:32:14.303 rmmod nvme_keyring 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 359231 ']' 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 359231 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 359231 ']' 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 359231 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:14.561 12:38:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 359231 00:32:14.561 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:32:14.561 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:32:14.561 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 359231' 00:32:14.561 killing process with pid 359231 00:32:14.562 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 359231 00:32:14.562 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 359231 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.820 12:38:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.724 12:38:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:16.724 00:32:16.724 real 0m19.704s 00:32:16.724 user 0m47.595s 00:32:16.724 sys 0m9.327s 00:32:16.724 12:38:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.724 12:38:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:16.724 ************************************ 00:32:16.724 END TEST nvmf_target_disconnect 00:32:16.724 ************************************ 00:32:16.724 12:38:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:16.724 00:32:16.724 real 6m1.208s 00:32:16.724 user 11m30.515s 00:32:16.724 sys 1m52.705s 00:32:16.724 12:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.724 12:38:48 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.724 ************************************ 00:32:16.724 END TEST nvmf_host 00:32:16.724 ************************************ 00:32:16.724 12:38:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:16.724 12:38:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:16.724 12:38:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:16.724 12:38:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:16.724 12:38:48 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.724 12:38:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.983 ************************************ 00:32:16.983 START TEST nvmf_target_core_interrupt_mode 00:32:16.983 ************************************ 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:16.983 * Looking for test storage... 
00:32:16.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:16.983 12:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:16.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.983 --rc 
genhtml_branch_coverage=1 00:32:16.983 --rc genhtml_function_coverage=1 00:32:16.983 --rc genhtml_legend=1 00:32:16.983 --rc geninfo_all_blocks=1 00:32:16.983 --rc geninfo_unexecuted_blocks=1 00:32:16.983 00:32:16.983 ' 00:32:16.983 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:16.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.983 --rc genhtml_branch_coverage=1 00:32:16.984 --rc genhtml_function_coverage=1 00:32:16.984 --rc genhtml_legend=1 00:32:16.984 --rc geninfo_all_blocks=1 00:32:16.984 --rc geninfo_unexecuted_blocks=1 00:32:16.984 00:32:16.984 ' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:16.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.984 --rc genhtml_branch_coverage=1 00:32:16.984 --rc genhtml_function_coverage=1 00:32:16.984 --rc genhtml_legend=1 00:32:16.984 --rc geninfo_all_blocks=1 00:32:16.984 --rc geninfo_unexecuted_blocks=1 00:32:16.984 00:32:16.984 ' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:16.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.984 --rc genhtml_branch_coverage=1 00:32:16.984 --rc genhtml_function_coverage=1 00:32:16.984 --rc genhtml_legend=1 00:32:16.984 --rc geninfo_all_blocks=1 00:32:16.984 --rc geninfo_unexecuted_blocks=1 00:32:16.984 00:32:16.984 ' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.984 
12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.984 12:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.984 
12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.984 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.243 ************************************ 00:32:17.243 START TEST nvmf_abort 00:32:17.243 ************************************ 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:17.243 * Looking for test storage... 
00:32:17.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:17.243 12:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.243 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:17.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.243 --rc genhtml_branch_coverage=1 00:32:17.243 --rc genhtml_function_coverage=1 00:32:17.243 --rc genhtml_legend=1 00:32:17.243 --rc geninfo_all_blocks=1 00:32:17.244 --rc geninfo_unexecuted_blocks=1 00:32:17.244 00:32:17.244 ' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:17.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.244 --rc genhtml_branch_coverage=1 00:32:17.244 --rc genhtml_function_coverage=1 00:32:17.244 --rc genhtml_legend=1 00:32:17.244 --rc geninfo_all_blocks=1 00:32:17.244 --rc geninfo_unexecuted_blocks=1 00:32:17.244 00:32:17.244 ' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:17.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.244 --rc genhtml_branch_coverage=1 00:32:17.244 --rc genhtml_function_coverage=1 00:32:17.244 --rc genhtml_legend=1 00:32:17.244 --rc geninfo_all_blocks=1 00:32:17.244 --rc geninfo_unexecuted_blocks=1 00:32:17.244 00:32:17.244 ' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:17.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.244 --rc genhtml_branch_coverage=1 00:32:17.244 --rc genhtml_function_coverage=1 00:32:17.244 --rc genhtml_legend=1 00:32:17.244 --rc geninfo_all_blocks=1 00:32:17.244 --rc geninfo_unexecuted_blocks=1 00:32:17.244 00:32:17.244 ' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.244 12:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.244 12:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.244 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.512 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:22.512 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:22.512 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.512 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.513 
12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:22.513 Found net devices under 0000:af:00.0: cvl_0_0 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:22.513 Found net devices under 0000:af:00.1: cvl_0_1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.513 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.513 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:32:22.513 00:32:22.513 --- 10.0.0.2 ping statistics --- 00:32:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.513 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:32:22.513 00:32:22.513 --- 10.0.0.1 ping statistics --- 00:32:22.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.513 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=364088 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 364088 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 364088 ']' 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.513 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:22.513 [2024-11-06 12:38:54.107429] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:22.513 [2024-11-06 12:38:54.108770] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:32:22.513 [2024-11-06 12:38:54.108812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.773 [2024-11-06 12:38:54.179575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:22.773 [2024-11-06 12:38:54.220053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.773 [2024-11-06 12:38:54.220085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.773 [2024-11-06 12:38:54.220091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.773 [2024-11-06 12:38:54.220097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.773 [2024-11-06 12:38:54.220101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.773 [2024-11-06 12:38:54.221372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.773 [2024-11-06 12:38:54.221485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.773 [2024-11-06 12:38:54.221487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.773 [2024-11-06 12:38:54.287449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:22.773 [2024-11-06 12:38:54.287477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:22.773 [2024-11-06 12:38:54.287557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:22.773 [2024-11-06 12:38:54.287664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.773 [2024-11-06 12:38:54.374021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.773 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:32:23.032 Malloc0 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.032 Delay0 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.032 [2024-11-06 12:38:54.438069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.032 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:23.032 [2024-11-06 12:38:54.523956] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:25.566 Initializing NVMe Controllers 00:32:25.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:25.566 controller IO queue size 128 less than required 00:32:25.566 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:25.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:25.566 Initialization complete. Launching workers. 
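The `rpc_cmd` calls traced above configure the target step by step: create the TCP transport, a malloc bdev, a delay bdev layered on it (so outstanding I/O lingers long enough to abort), then a subsystem with that namespace and a listener. A condensed replay of the same sequence is sketched below; it is shown with `echo` in place of a live invocation, since the real calls need a running `nvmf_tgt` and an SPDK checkout providing `scripts/rpc.py`. All values are copied from this log.

```shell
#!/usr/bin/env bash
# Swap the echo out for the real scripts/rpc.py once a target is running.
rpc() { echo "scripts/rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0
# Delay bdev: 1s average latency on every op, so aborts land on queued I/O.
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

With the listener up, the `build/examples/abort` tool connects to `10.0.0.2:4420` at queue depth 128 and produces the submitted/success/unsuccessful abort counts seen in the output that follows.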
00:32:25.566 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24238 00:32:25.566 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24295, failed to submit 66 00:32:25.566 success 24238, unsuccessful 57, failed 0 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.566 rmmod nvme_tcp 00:32:25.566 rmmod nvme_fabrics 00:32:25.566 rmmod nvme_keyring 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.566 12:38:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 364088 ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 364088 ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 364088' 00:32:25.566 killing process with pid 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 364088 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.566 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.471 00:32:27.471 real 0m10.403s 00:32:27.471 user 0m10.003s 00:32:27.471 sys 0m5.107s 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:27.471 ************************************ 00:32:27.471 END TEST nvmf_abort 00:32:27.471 ************************************ 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
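The teardown traced above (`iptr` plus `remove_spdk_ns`) restores the firewall by filtering out only the SPDK_NVMF-tagged rules and then tears down the test namespace. A hedged sketch of that cleanup, echoed rather than executed (the exact contents of `_remove_spdk_ns` are not shown in this log, so the namespace-delete step is an assumption based on its effect):

```shell
#!/usr/bin/env bash
# Print the cleanup steps this log's teardown performs.
teardown() {
  # Drop every rule tagged with the SPDK_NVMF comment, keep everything else.
  echo "iptables-save | grep -v SPDK_NVMF | iptables-restore"
  # Presumed body of _remove_spdk_ns: delete the per-test namespace.
  echo "ip netns delete cvl_0_0_ns_spdk"
  # Final step visible in the trace: flush the initiator-side address.
  echo "ip -4 addr flush cvl_0_1"
}
teardown
```

Tagging the ACCEPT rules with a comment at setup time is what makes this `grep -v` restore safe: unrelated firewall rules on the build host survive the test untouched.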
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:27.471 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:27.731 ************************************ 00:32:27.731 START TEST nvmf_ns_hotplug_stress 00:32:27.731 ************************************ 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:27.731 * Looking for test storage... 00:32:27.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.731 --rc genhtml_branch_coverage=1 00:32:27.731 --rc genhtml_function_coverage=1 00:32:27.731 --rc genhtml_legend=1 00:32:27.731 --rc geninfo_all_blocks=1 00:32:27.731 --rc geninfo_unexecuted_blocks=1 00:32:27.731 00:32:27.731 ' 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.731 --rc genhtml_branch_coverage=1 00:32:27.731 --rc genhtml_function_coverage=1 00:32:27.731 --rc genhtml_legend=1 00:32:27.731 --rc geninfo_all_blocks=1 00:32:27.731 --rc geninfo_unexecuted_blocks=1 00:32:27.731 00:32:27.731 ' 00:32:27.731 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.731 --rc genhtml_branch_coverage=1 00:32:27.731 --rc genhtml_function_coverage=1 00:32:27.731 --rc genhtml_legend=1 00:32:27.731 --rc geninfo_all_blocks=1 00:32:27.731 --rc geninfo_unexecuted_blocks=1 00:32:27.731 00:32:27.731 ' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.732 --rc genhtml_branch_coverage=1 00:32:27.732 --rc genhtml_function_coverage=1 00:32:27.732 --rc genhtml_legend=1 00:32:27.732 --rc geninfo_all_blocks=1 00:32:27.732 --rc geninfo_unexecuted_blocks=1 00:32:27.732 00:32:27.732 ' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.732 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.732 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.732 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:33.006 12:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.006 
12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:33.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.006 12:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:33.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.006 12:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:33.006 Found net devices under 0000:af:00.0: cvl_0_0 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:33.006 Found net devices under 0000:af:00.1: cvl_0_1 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.006 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.266 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.266 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.266 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.266 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:32:33.267 00:32:33.267 --- 10.0.0.2 ping statistics --- 00:32:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.267 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:33.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:32:33.267 00:32:33.267 --- 10.0.0.1 ping statistics --- 00:32:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.267 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.267 12:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:33.267 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=368215 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 368215 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 368215 ']' 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:33.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.526 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:33.526 [2024-11-06 12:39:04.947052] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:33.526 [2024-11-06 12:39:04.948368] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:32:33.526 [2024-11-06 12:39:04.948411] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.526 [2024-11-06 12:39:05.020258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:33.526 [2024-11-06 12:39:05.060784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.526 [2024-11-06 12:39:05.060815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.526 [2024-11-06 12:39:05.060822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.526 [2024-11-06 12:39:05.060827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.526 [2024-11-06 12:39:05.060832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:33.526 [2024-11-06 12:39:05.062175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.526 [2024-11-06 12:39:05.062250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.526 [2024-11-06 12:39:05.062252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.526 [2024-11-06 12:39:05.128626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:33.526 [2024-11-06 12:39:05.128649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:33.526 [2024-11-06 12:39:05.128708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:33.526 [2024-11-06 12:39:05.128810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:32:33.785 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:34.043 [2024-11-06 12:39:05.474941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.043 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:34.302 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.561 [2024-11-06 12:39:06.014899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.561 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.819 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:35.078 Malloc0 00:32:35.079 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:35.337 Delay0 00:32:35.337 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.595 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:35.854 NULL1 00:32:35.854 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:36.112 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=368729 00:32:36.112 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:36.112 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:36.112 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.370 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.629 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:36.629 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:36.887 true 00:32:36.887 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:36.887 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.145 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.403 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:37.403 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:37.662 true 00:32:37.662 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:37.662 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.920 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.178 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:38.178 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:38.437 true 00:32:38.437 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:38.437 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.695 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.954 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:38.954 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:39.212 true 00:32:39.212 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:39.212 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.471 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.729 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:39.729 12:39:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:39.988 true 00:32:39.988 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:39.988 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.247 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.815 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:40.815 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:40.815 true 00:32:40.815 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:40.815 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.074 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.333 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:32:41.333 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:41.591 true 00:32:41.591 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:41.591 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.850 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.108 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:42.108 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:42.366 true 00:32:42.366 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:42.366 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.625 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.884 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:32:42.884 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:43.142 true 00:32:43.142 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:43.142 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.401 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.660 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:43.660 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:43.918 true 00:32:43.918 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:43.918 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.177 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:44.435 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:44.435 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:44.693 true 00:32:44.693 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:44.693 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.952 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.211 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:45.211 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:45.469 true 00:32:45.469 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:45.469 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.727 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.985 12:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:45.985 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:46.243 true 00:32:46.243 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:46.243 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.501 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.760 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:46.760 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:47.019 true 00:32:47.019 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:47.019 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.587 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:32:47.588 12:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:47.588 12:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:47.846 true 00:32:47.846 12:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:47.846 12:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.413 12:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.413 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:48.413 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:48.671 true 00:32:48.671 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:48.671 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.930 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.496 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:49.496 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:49.496 true 00:32:49.496 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:49.496 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.754 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.321 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:50.321 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:50.321 true 00:32:50.321 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:50.321 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.635 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.932 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:50.932 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:51.218 true 00:32:51.218 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:51.218 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.477 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.735 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:51.735 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:51.994 true 00:32:51.994 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:51.994 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.254 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.512 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:52.512 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:52.771 true 00:32:52.771 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:52.771 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.029 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.287 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:53.287 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:53.545 true 00:32:53.545 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:53.545 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.804 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.062 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:54.062 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:54.320 true 00:32:54.320 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:54.320 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.579 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.838 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:54.838 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:55.097 true 00:32:55.097 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:55.097 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.355 12:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.614 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:55.614 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:55.873 true 00:32:55.873 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:55.873 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.131 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.389 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:56.389 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:56.647 true 00:32:56.647 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:56.647 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:32:56.906 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.165 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:57.165 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:57.424 true 00:32:57.424 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:57.424 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.682 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.941 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:57.941 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:58.200 true 00:32:58.200 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:58.200 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:32:58.461 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.719 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:58.719 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:58.719 true 00:32:58.719 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:58.719 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.287 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.287 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:59.287 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:59.546 true 00:32:59.546 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:32:59.546 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.113 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.113 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:00.113 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:00.372 true 00:33:00.372 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:00.372 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.630 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.889 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:00.889 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:01.147 true 00:33:01.147 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:01.147 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.405 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.664 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:33:01.664 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:33:01.923 true 00:33:01.923 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:01.923 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.489 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.489 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:33:02.489 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:33:02.746 true 00:33:02.746 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:02.746 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.005 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.572 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:33:03.572 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:33:03.572 true 00:33:03.572 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:03.572 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.831 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.089 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:33:04.347 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:33:04.347 true 00:33:04.605 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:04.605 12:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.863 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.863 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:33:04.863 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:33:05.122 true 00:33:05.122 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:05.122 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.381 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.639 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:33:05.639 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:33:05.897 true 00:33:05.897 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 
00:33:05.897 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.156 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.415 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:33:06.415 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:33:06.674 Initializing NVMe Controllers 00:33:06.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:06.674 Controller IO queue size 128, less than required. 00:33:06.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:06.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:06.674 Initialization complete. Launching workers. 
00:33:06.674 ========================================================
00:33:06.674 Latency(us)
00:33:06.674 Device Information : IOPS MiB/s Average min max
00:33:06.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30125.00 14.71 4248.61 1568.35 8084.57
00:33:06.674 ========================================================
00:33:06.674 Total : 30125.00 14.71 4248.61 1568.35 8084.57
00:33:06.674
00:33:06.674 true 00:33:06.674 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 368729 00:33:06.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (368729) - No such process 00:33:06.674 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 368729 00:33:06.674 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.933 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.192 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:07.192 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:07.192 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:07.192 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:07.192 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:07.450 null0 00:33:07.708 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:07.709 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:07.709 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:07.967 null1 00:33:07.967 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:07.967 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:07.967 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:08.226 null2 00:33:08.226 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:08.226 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:08.226 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:08.485 null3 00:33:08.485 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:08.485 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
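The repeated `@59`/`@60` entries in this stretch create one null bdev per worker. A minimal sketch of that creation loop (ns_hotplug_stress.sh lines 58-60 as traced), again with a hypothetical `rpc` stub in place of `scripts/rpc.py`; the `100 4096` arguments are copied from the trace (size in MB and block size, per the argument order shown):

```shell
#!/usr/bin/env bash
# Sketch of the null-bdev creation loop from the trace. `rpc` is a stand-in
# stub so the loop is runnable without a live SPDK target.
rpc() { echo "rpc $*"; }

nthreads=8
i=0
while (( i < nthreads )); do
    rpc bdev_null_create "null$i" 100 4096   # null0 .. null7, args as in the trace
    (( ++i ))
done
```

Each iteration matches one `bdev_null_create nullN 100 4096` call followed by the bdev name echoed back in the log.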
00:33:08.485 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:08.744 null4 00:33:08.744 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:08.744 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:08.744 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:09.003 null5 00:33:09.003 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:09.003 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:09.003 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:09.261 null6 00:33:09.261 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:09.261 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:09.261 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:09.521 null7 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:09.521 12:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
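The interleaved `@62`-`@64` and `@14`-`@18` entries above are eight background invocations of `add_remove`, each cycling one namespace ten times; the PIDs collected in `pids` are what the later `wait 374940 374942 ...` line joins on. A runnable sketch of that structure, with a hypothetical `rpc` stub in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of add_remove (ns_hotplug_stress.sh lines 14-18) and the parallel
# launch traced above. `rpc` is a local stub, not SPDK's scripts/rpc.py.
rpc() { echo "rpc $*"; }

add_remove() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

pids=()
for (( i = 0; i < 8; i++ )); do
    add_remove $((i + 1)) "null$i" &   # worker N hot-plugs NSID N+1 backed by nullN
    pids+=($!)
done
wait "${pids[@]}"                       # join all eight workers, as in sh@66
```

Because the eight workers run concurrently against one subsystem, their add/remove RPCs interleave arbitrarily, which is exactly the out-of-order `@17`/`@18` pattern visible in the trace.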
00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 374940 374942 374945 374948 374951 374954 374956 374960 00:33:09.521 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:09.522 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:09.522 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.522 12:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:09.781 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.781 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.781 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.040 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:10.299 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.299 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.299 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:10.299 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:10.300 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:10.300 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:10.559 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.559 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:10.818 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:10.818 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.077 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:11.335 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:11.335 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:11.335 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.593 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:11.852 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:12.111 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:12.112 12:39:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.112 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.371 12:39:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.371 12:39:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:12.371 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.630 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:12.631 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:12.631 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:12.631 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.631 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.631 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:12.890 
12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:12.890 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:13.149 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.409 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:13.409 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.668 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:13.927 12:39:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:13.927 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:14.186 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.446 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:14.446 12:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.446 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.446 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:14.446 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.446 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.446 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:14.706 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.966 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.225 rmmod nvme_tcp 00:33:15.225 rmmod nvme_fabrics 00:33:15.225 rmmod nvme_keyring 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 368215 ']' 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 368215 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 368215 ']' 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 368215 00:33:15.225 
12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 368215 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 368215' 00:33:15.225 killing process with pid 368215 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 368215 00:33:15.225 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 368215 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.484 12:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.484 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.390 00:33:17.390 real 0m49.864s 00:33:17.390 user 3m21.138s 00:33:17.390 sys 0m21.734s 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:17.390 ************************************ 00:33:17.390 END TEST nvmf_ns_hotplug_stress 00:33:17.390 ************************************ 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:17.390 12:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:33:17.390 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:17.650 ************************************ 00:33:17.650 START TEST nvmf_delete_subsystem 00:33:17.650 ************************************ 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:17.650 * Looking for test storage... 00:33:17.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.650 12:39:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:17.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.650 --rc genhtml_branch_coverage=1 00:33:17.650 --rc genhtml_function_coverage=1 00:33:17.650 --rc genhtml_legend=1 00:33:17.650 --rc geninfo_all_blocks=1 00:33:17.650 --rc geninfo_unexecuted_blocks=1 00:33:17.650 00:33:17.650 ' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:17.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.650 --rc genhtml_branch_coverage=1 00:33:17.650 --rc genhtml_function_coverage=1 00:33:17.650 --rc genhtml_legend=1 00:33:17.650 --rc geninfo_all_blocks=1 00:33:17.650 --rc geninfo_unexecuted_blocks=1 00:33:17.650 00:33:17.650 ' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:17.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.650 --rc genhtml_branch_coverage=1 00:33:17.650 --rc genhtml_function_coverage=1 00:33:17.650 --rc genhtml_legend=1 00:33:17.650 --rc geninfo_all_blocks=1 00:33:17.650 --rc geninfo_unexecuted_blocks=1 00:33:17.650 00:33:17.650 ' 00:33:17.650 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:17.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.650 --rc genhtml_branch_coverage=1 00:33:17.650 --rc genhtml_function_coverage=1 00:33:17.650 --rc genhtml_legend=1 00:33:17.650 --rc geninfo_all_blocks=1 00:33:17.650 --rc geninfo_unexecuted_blocks=1 00:33:17.650 00:33:17.650 ' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.651 12:39:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.651 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.911 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.911 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.911 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.911 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.180 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.180 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:23.180 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:23.180 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.180 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:23.180 Found net devices under 0000:af:00.0: cvl_0_0 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:23.180 Found net devices under 0000:af:00.1: cvl_0_1 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.180 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.180 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.181 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:23.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:33:23.181 00:33:23.181 --- 10.0.0.2 ping statistics --- 00:33:23.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.181 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:23.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:33:23.181 00:33:23.181 --- 10.0.0.1 ping statistics --- 00:33:23.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.181 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.181 
12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=379460 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 379460 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 379460 ']' 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:23.181 [2024-11-06 12:39:54.323891] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:23.181 [2024-11-06 12:39:54.325232] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:33:23.181 [2024-11-06 12:39:54.325276] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.181 [2024-11-06 12:39:54.428581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:23.181 [2024-11-06 12:39:54.477380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.181 [2024-11-06 12:39:54.477420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.181 [2024-11-06 12:39:54.477431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.181 [2024-11-06 12:39:54.477444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.181 [2024-11-06 12:39:54.477451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:23.181 [2024-11-06 12:39:54.478879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.181 [2024-11-06 12:39:54.478892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.181 [2024-11-06 12:39:54.554009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.181 [2024-11-06 12:39:54.554027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:23.181 [2024-11-06 12:39:54.554311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 
-- # set +x 00:33:23.181 [2024-11-06 12:39:54.611638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 [2024-11-06 12:39:54.631885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 NULL1 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 Delay0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.181 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=379687 00:33:23.182 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:23.182 12:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:23.182 [2024-11-06 12:39:54.710991] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:25.085 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.085 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.085 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 starting I/O failed: -6 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 starting I/O failed: -6 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 starting I/O failed: -6 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 Read completed with error (sct=0, sc=8) 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.344 starting I/O failed: -6 00:33:25.344 Write completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 
00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 starting I/O failed: -6 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 starting I/O failed: -6 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 starting I/O failed: -6 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 starting I/O failed: -6 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 starting I/O failed: -6 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 [2024-11-06 12:39:56.788917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8d4c00d680 is same with the state(6) to be set 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Write completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 Read completed with error 
(sct=0, sc=8) 00:33:25.345 Read completed with error (sct=0, sc=8) 00:33:25.345 [repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and intermittent "starting I/O failed: -6" entries elided] 00:33:25.345 
00:33:25.345 [repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and intermittent "starting I/O failed: -6" entries elided] 00:33:25.345 [2024-11-06 12:39:56.789556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d2c0 is same with the state(6) to be set 00:33:26.282 [2024-11-06 12:39:57.765136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135e5e0 is same with the state(6) to be set 00:33:26.282 [repeated Read/Write completed with error (sct=0, sc=8) entries elided] 00:33:26.282 [2024-11-06 12:39:57.791313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8d4c00d350 is same with the state(6) to be set 00:33:26.283 [repeated Read/Write completed with error (sct=0, sc=8) entries elided] 00:33:26.283 [2024-11-06 12:39:57.792213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d4a0 is same with the state(6) to be set 00:33:26.283 [repeated Read/Write completed with error (sct=0, sc=8) entries elided] 00:33:26.283 [2024-11-06 12:39:57.792330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135d0e0 is same with the state(6) to be set 00:33:26.283 [repeated Read/Write completed with error (sct=0, sc=8) entries elided] 00:33:26.283 [2024-11-06 12:39:57.792857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135cf00 is same with the state(6) to be set 00:33:26.283 Initializing NVMe Controllers 00:33:26.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.283 Controller IO queue size 128, less than required. 00:33:26.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:26.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:26.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:26.283 Initialization complete. Launching workers. 
00:33:26.283 ======================================================== 00:33:26.283 Latency(us) 00:33:26.283 Device Information : IOPS MiB/s Average min max 00:33:26.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.14 0.09 1013668.40 397.40 2002328.96 00:33:26.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.32 0.07 905462.58 221.92 2001448.42 00:33:26.283 ======================================================== 00:33:26.283 Total : 330.47 0.16 963464.80 221.92 2002328.96 00:33:26.283 00:33:26.283 [2024-11-06 12:39:57.793435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135e5e0 (9): Bad file descriptor 00:33:26.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:26.283 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.283 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:26.283 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 379687 00:33:26.283 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 379687 00:33:26.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (379687) - No such process 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 379687 00:33:26.852 12:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 379687 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 379687 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:26.852 [2024-11-06 12:39:58.311561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=380227 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:26.852 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:26.852 [2024-11-06 12:39:58.377500] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:27.419 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:27.419 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:27.419 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:27.986 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:27.986 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:27.986 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:28.244 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:28.244 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:28.244 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:28.811 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:28.811 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:28.811 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:29.377 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:29.377 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:29.377 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:29.944 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:29.944 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:29.944 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:29.944 Initializing NVMe Controllers 00:33:29.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.944 Controller IO queue size 128, less than required. 00:33:29.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:29.944 Initialization complete. Launching workers. 
00:33:29.944 ======================================================== 00:33:29.944 Latency(us) 00:33:29.944 Device Information : IOPS MiB/s Average min max 00:33:29.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005629.81 1000165.45 1042317.59 00:33:29.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003574.60 1000162.58 1012990.73 00:33:29.944 ======================================================== 00:33:29.944 Total : 256.00 0.12 1004602.20 1000162.58 1042317.59 00:33:29.944 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 380227 00:33:30.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (380227) - No such process 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 380227 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.509 rmmod nvme_tcp 00:33:30.509 rmmod nvme_fabrics 00:33:30.509 rmmod nvme_keyring 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:30.509 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 379460 ']' 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 379460 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 379460 ']' 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 379460 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 379460 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 379460' 00:33:30.510 killing process with pid 379460 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 379460 00:33:30.510 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 379460 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.768 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:32.697 00:33:32.697 real 0m15.189s 00:33:32.697 user 0m25.429s 00:33:32.697 sys 0m5.434s 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:32.697 ************************************ 00:33:32.697 END TEST nvmf_delete_subsystem 00:33:32.697 ************************************ 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:32.697 ************************************ 00:33:32.697 START TEST nvmf_host_management 00:33:32.697 ************************************ 00:33:32.697 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:32.957 * Looking for test storage... 
00:33:32.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.957 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:32.957 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.958 12:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.958 --rc genhtml_branch_coverage=1 00:33:32.958 --rc genhtml_function_coverage=1 00:33:32.958 --rc genhtml_legend=1 00:33:32.958 --rc geninfo_all_blocks=1 00:33:32.958 --rc geninfo_unexecuted_blocks=1 00:33:32.958 00:33:32.958 ' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.958 --rc genhtml_branch_coverage=1 00:33:32.958 --rc genhtml_function_coverage=1 00:33:32.958 --rc genhtml_legend=1 00:33:32.958 --rc geninfo_all_blocks=1 00:33:32.958 --rc geninfo_unexecuted_blocks=1 00:33:32.958 00:33:32.958 ' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.958 --rc genhtml_branch_coverage=1 00:33:32.958 --rc genhtml_function_coverage=1 00:33:32.958 --rc genhtml_legend=1 00:33:32.958 --rc geninfo_all_blocks=1 00:33:32.958 --rc geninfo_unexecuted_blocks=1 00:33:32.958 00:33:32.958 ' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:32.958 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.958 --rc genhtml_branch_coverage=1 00:33:32.958 --rc genhtml_function_coverage=1 00:33:32.958 --rc genhtml_legend=1 00:33:32.958 --rc geninfo_all_blocks=1 00:33:32.958 --rc geninfo_unexecuted_blocks=1 00:33:32.958 00:33:32.958 ' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.958 12:40:04 
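The `cmp_versions 1.15 '<' 2` trace above splits each version string on `.-:` into arrays and compares them component by component to decide whether the installed lcov predates 2.x (and therefore needs the legacy `--rc lcov_*` option names). A minimal standalone sketch of that comparison, assuming purely numeric components (the function name `lt_version` is illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# lt_version A B: succeed (exit 0) if version A is strictly less than B.
# Mirrors the split-on-".-:" / per-component numeric compare seen in the
# scripts/common.sh xtrace above; a sketch, not the real implementation.
lt_version() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a < b )) && return 0           # first differing component decides
        (( a > b )) && return 1
    done
    return 1                              # equal => not less-than
}

if lt_version "1.15" "2"; then
    echo "lcov older than 2: use legacy --rc lcov_* option names"
fi
```

This is why the trace ends with `lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'`: 1.15 compares below 2 at the first component.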
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.958 
12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.958 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.959 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.235 
12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.235 12:40:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:38.235 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.235 12:40:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:38.235 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.235 12:40:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:38.235 Found net devices under 0000:af:00.0: cvl_0_0 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.235 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:38.235 Found net devices under 0000:af:00.1: cvl_0_1 00:33:38.236 12:40:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.236 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:33:38.496 00:33:38.496 --- 10.0.0.2 ping statistics --- 00:33:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.496 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:38.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:33:38.496 00:33:38.496 --- 10.0.0.1 ping statistics --- 00:33:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.496 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
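The `nvmf_tcp_init` trace above builds the phy test topology: one port of the NIC (`cvl_0_0`) is moved into a fresh network namespace to host the target at 10.0.0.2, while the other port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. A condensed sketch of those steps (requires root; interface names mirror this log and will differ on other hardware):

```shell
#!/usr/bin/env bash
# Target-in-namespace setup, condensed from the ip/iptables xtrace above.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) arriving on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, as in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this in place, `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` lets the harness launch the target process inside the namespace while the initiator tools run outside it.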
00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=384476 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 384476 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 384476 ']' 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100
00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:38.496 12:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.496 [2024-11-06 12:40:09.983435] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:38.496 [2024-11-06 12:40:09.984770] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:33:38.496 [2024-11-06 12:40:09.984813] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:38.496 [2024-11-06 12:40:10.065272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:38.496 [2024-11-06 12:40:10.112324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:38.496 [2024-11-06 12:40:10.112358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:38.496 [2024-11-06 12:40:10.112365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:38.496 [2024-11-06 12:40:10.112370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:38.496 [2024-11-06 12:40:10.112375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:38.754 [2024-11-06 12:40:10.113848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:38.754 [2024-11-06 12:40:10.113865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:38.754 [2024-11-06 12:40:10.113883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:33:38.754 [2024-11-06 12:40:10.113885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:38.754 [2024-11-06 12:40:10.179335] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:38.754 [2024-11-06 12:40:10.179380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:38.754 [2024-11-06 12:40:10.179482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:38.754 [2024-11-06 12:40:10.179763] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:38.754 [2024-11-06 12:40:10.179927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.754 [2024-11-06 12:40:10.258601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.754 Malloc0
00:33:38.754 [2024-11-06 12:40:10.322530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:38.754 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:33:38.755 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:38.755 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=384517
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 384517 /var/tmp/bdevperf.sock
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 384517 ']'
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:39.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:39.013 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:39.013 {
00:33:39.013 "params": {
00:33:39.013 "name": "Nvme$subsystem",
00:33:39.013 "trtype": "$TEST_TRANSPORT",
00:33:39.013 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:39.013 "adrfam": "ipv4",
00:33:39.013 "trsvcid": "$NVMF_PORT",
00:33:39.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:39.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:39.014 "hdgst": ${hdgst:-false},
00:33:39.014 "ddgst": ${ddgst:-false}
00:33:39.014 },
00:33:39.014 "method": "bdev_nvme_attach_controller"
00:33:39.014 }
00:33:39.014 EOF
00:33:39.014 )")
00:33:39.014 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:39.014 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:39.014 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:39.014 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:39.014 "params": {
00:33:39.014 "name": "Nvme0",
00:33:39.014 "trtype": "tcp",
00:33:39.014 "traddr": "10.0.0.2",
00:33:39.014 "adrfam": "ipv4",
00:33:39.014 "trsvcid": "4420",
00:33:39.014 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:39.014 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:39.014 "hdgst": false,
00:33:39.014 "ddgst": false
00:33:39.014 },
00:33:39.014 "method": "bdev_nvme_attach_controller"
00:33:39.014 }'
00:33:39.014 [2024-11-06 12:40:10.424255] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:33:39.014 [2024-11-06 12:40:10.424315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384517 ]
00:33:39.014 [2024-11-06 12:40:10.517746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.014 [2024-11-06 12:40:10.566272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:39.273 Running I/O for 10 seconds...
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=89
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 89 -ge 100 ']'
00:33:39.273 12:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:39.532 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.793 [2024-11-06 12:40:11.150359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255ff20 is same with the state(6) to be set
00:33:39.793 [2024-11-06 12:40:11.150394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255ff20 is same with the state(6) to be set
00:33:39.793 [2024-11-06 12:40:11.150402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255ff20 is same with the state(6) to be set
00:33:39.793 [2024-11-06 12:40:11.150408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255ff20 is same with the state(6) to be set
00:33:39.793 [2024-11-06 12:40:11.153022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.793 [2024-11-06 12:40:11.153062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.793 [2024-11-06 12:40:11.153089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.793 [2024-11-06 12:40:11.153112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.793 [2024-11-06 12:40:11.153133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8a40 is same with the state(6) to be set
00:33:39.793 [2024-11-06 12:40:11.153192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.793 [2024-11-06 12:40:11.153585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.793 [2024-11-06 12:40:11.153597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.153985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.153997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.794 [2024-11-06 12:40:11.154399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.794 [2024-11-06 12:40:11.154408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.795 [2024-11-06 12:40:11.154608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.795 [2024-11-06 12:40:11.154618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:33:39.795 [2024-11-06 12:40:11.154630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.795 [2024-11-06 12:40:11.154640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.795 [2024-11-06 12:40:11.156040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:39.795 task offset: 81920 on job bdev=Nvme0n1 fails 00:33:39.795 00:33:39.795 Latency(us) 00:33:39.795 [2024-11-06T11:40:11.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.795 Job: Nvme0n1 ended in about 0.42 seconds with error 00:33:39.795 Verification LBA range: start 0x0 length 0x400 00:33:39.795 Nvme0n1 : 0.42 1509.46 94.34 150.95 0.00 37121.52 2353.34 34078.72 00:33:39.795 [2024-11-06T11:40:11.410Z] =================================================================================================================== 00:33:39.795 [2024-11-06T11:40:11.410Z] Total : 1509.46 94.34 150.95 0.00 37121.52 2353.34 34078.72 00:33:39.795 [2024-11-06 12:40:11.159178] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:39.795 
[2024-11-06 12:40:11.159206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc8a40 (9): Bad file descriptor 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.795 12:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:39.795 [2024-11-06 12:40:11.202777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 384517 00:33:40.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (384517) - No such process 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:33:40.732 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.732 { 00:33:40.732 "params": { 00:33:40.732 "name": "Nvme$subsystem", 00:33:40.732 "trtype": "$TEST_TRANSPORT", 00:33:40.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.732 "adrfam": "ipv4", 00:33:40.732 "trsvcid": "$NVMF_PORT", 00:33:40.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.732 "hdgst": ${hdgst:-false}, 00:33:40.732 "ddgst": ${ddgst:-false} 00:33:40.732 }, 00:33:40.733 "method": "bdev_nvme_attach_controller" 00:33:40.733 } 00:33:40.733 EOF 00:33:40.733 )") 00:33:40.733 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:40.733 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:40.733 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:40.733 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.733 "params": { 00:33:40.733 "name": "Nvme0", 00:33:40.733 "trtype": "tcp", 00:33:40.733 "traddr": "10.0.0.2", 00:33:40.733 "adrfam": "ipv4", 00:33:40.733 "trsvcid": "4420", 00:33:40.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.733 "hdgst": false, 00:33:40.733 "ddgst": false 00:33:40.733 }, 00:33:40.733 "method": "bdev_nvme_attach_controller" 00:33:40.733 }' 00:33:40.733 [2024-11-06 12:40:12.224212] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:33:40.733 [2024-11-06 12:40:12.224275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384804 ] 00:33:40.733 [2024-11-06 12:40:12.319406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.991 [2024-11-06 12:40:12.366309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.250 Running I/O for 1 seconds... 00:33:42.187 1600.00 IOPS, 100.00 MiB/s 00:33:42.187 Latency(us) 00:33:42.187 [2024-11-06T11:40:13.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:42.187 Verification LBA range: start 0x0 length 0x400 00:33:42.187 Nvme0n1 : 1.02 1633.76 102.11 0.00 0.00 38266.87 2666.12 34078.72 00:33:42.187 [2024-11-06T11:40:13.802Z] =================================================================================================================== 00:33:42.187 [2024-11-06T11:40:13.802Z] Total : 1633.76 102.11 0.00 0.00 38266.87 2666.12 34078.72 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.446 rmmod nvme_tcp 00:33:42.446 rmmod nvme_fabrics 00:33:42.446 rmmod nvme_keyring 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 384476 ']' 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 384476 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 384476 ']' 00:33:42.446 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 384476 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:33:42.447 12:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 384476 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 384476' 00:33:42.447 killing process with pid 384476 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 384476 00:33:42.447 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 384476 00:33:42.705 [2024-11-06 12:40:14.111232] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:33:42.705 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:42.705 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:33:42.706 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.706 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.706 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.706 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.706 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.609 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.609 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:44.609 00:33:44.609 real 0m11.920s 00:33:44.609 user 0m18.323s 00:33:44.609 sys 0m5.840s 00:33:44.609 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:44.609 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:44.609 ************************************ 00:33:44.609 END TEST nvmf_host_management 00:33:44.609 ************************************ 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:44.868 
12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.868 ************************************ 00:33:44.868 START TEST nvmf_lvol 00:33:44.868 ************************************ 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:44.868 * Looking for test storage... 00:33:44.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.868 12:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:44.868 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.869 --rc genhtml_branch_coverage=1 00:33:44.869 --rc 
genhtml_function_coverage=1 00:33:44.869 --rc genhtml_legend=1 00:33:44.869 --rc geninfo_all_blocks=1 00:33:44.869 --rc geninfo_unexecuted_blocks=1 00:33:44.869 00:33:44.869 ' 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.869 --rc genhtml_branch_coverage=1 00:33:44.869 --rc genhtml_function_coverage=1 00:33:44.869 --rc genhtml_legend=1 00:33:44.869 --rc geninfo_all_blocks=1 00:33:44.869 --rc geninfo_unexecuted_blocks=1 00:33:44.869 00:33:44.869 ' 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.869 --rc genhtml_branch_coverage=1 00:33:44.869 --rc genhtml_function_coverage=1 00:33:44.869 --rc genhtml_legend=1 00:33:44.869 --rc geninfo_all_blocks=1 00:33:44.869 --rc geninfo_unexecuted_blocks=1 00:33:44.869 00:33:44.869 ' 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.869 --rc genhtml_branch_coverage=1 00:33:44.869 --rc genhtml_function_coverage=1 00:33:44.869 --rc genhtml_legend=1 00:33:44.869 --rc geninfo_all_blocks=1 00:33:44.869 --rc geninfo_unexecuted_blocks=1 00:33:44.869 00:33:44.869 ' 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.869 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.129 12:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.129 12:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.129 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:50.399 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:50.400 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:50.400 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.400 12:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:50.400 Found net devices under 0000:af:00.0: cvl_0_0 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:50.400 Found net devices under 0000:af:00.1: cvl_0_1 00:33:50.400 12:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.400 12:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.400 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.400 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.400 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.400 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:50.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:33:50.659 00:33:50.659 --- 10.0.0.2 ping statistics --- 00:33:50.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.659 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:50.659 00:33:50.659 --- 10.0.0.1 ping statistics --- 00:33:50.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.659 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:50.659 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:50.919 
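The `nvmftestinit` trace above builds a two-endpoint TCP topology on one physical NIC pair: the target interface (`cvl_0_0`, 10.0.0.2) is moved into a network namespace (`cvl_0_0_ns_spdk`) while the initiator interface (`cvl_0_1`, 10.0.0.1) stays in the root namespace, and a tagged iptables rule opens port 4420. A dry-run sketch of the same sequence follows; the `run` wrapper only echoes each command, since the real `ip`/`iptables` calls need root and the named interfaces. Names and addresses are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmftestinit builds above.
# `run` echoes instead of executing, because ip/iptables require root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0; TGT_IP=10.0.0.2   # target side, moved into the netns
INI_IF=cvl_0_1; INI_IP=10.0.0.1   # initiator side, stays in the root ns

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag lets teardown find this rule.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Both directions are verified with a single ping, as in the log.
run ping -c 1 "$TGT_IP"
run ip netns exec "$NS" ping -c 1 "$INI_IP"
```

The namespace split means target and initiator traverse the real wire between the two ports instead of short-circuiting through the host loopback.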
12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=388788 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 388788 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 388788 ']' 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:50.919 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:50.919 [2024-11-06 12:40:22.327157] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:50.919 [2024-11-06 12:40:22.328017] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:33:50.919 [2024-11-06 12:40:22.328048] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.919 [2024-11-06 12:40:22.423897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:50.919 [2024-11-06 12:40:22.475686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.919 [2024-11-06 12:40:22.475728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.919 [2024-11-06 12:40:22.475739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.919 [2024-11-06 12:40:22.475748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.919 [2024-11-06 12:40:22.475755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.919 [2024-11-06 12:40:22.477356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.919 [2024-11-06 12:40:22.477471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:50.919 [2024-11-06 12:40:22.477477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.177 [2024-11-06 12:40:22.552297] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:51.177 [2024-11-06 12:40:22.552391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:51.178 [2024-11-06 12:40:22.552443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:51.178 [2024-11-06 12:40:22.552668] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.744 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:52.003 [2024-11-06 12:40:23.422194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.003 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:52.262 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:52.262 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:52.520 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:52.520 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:52.779 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:53.038 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c1fa1b52-f4b6-4ea9-a382-82b10e9befc8 00:33:53.038 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1fa1b52-f4b6-4ea9-a382-82b10e9befc8 lvol 20 00:33:53.295 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9a4e599a-256b-4dbb-8848-fcb9bc25dbe9 00:33:53.296 12:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:53.863 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a4e599a-256b-4dbb-8848-fcb9bc25dbe9 00:33:53.863 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.121 [2024-11-06 12:40:25.686209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.121 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.380 
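The `nvmf_lvol.sh` steps traced above provision the device under test bottom-up: two 64 MiB malloc bdevs, a raid0 over them, an lvstore on the raid, a 20 MiB lvol, then an NVMe-oF subsystem exposing that lvol on TCP port 4420. A dry-run sketch of the RPC sequence is below; `rpc_py` just echoes here (the real script is `scripts/rpc.py` against the running target), and the UUIDs are the ones reported in the log rather than values a rerun would produce.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the lvol provisioning sequence from nvmf_lvol.sh.
# rpc_py echoes instead of calling scripts/rpc.py; UUIDs below are the
# ones this particular run reported.
rpc_py() { echo "rpc.py $*"; }

LVS_UUID=c1fa1b52-f4b6-4ea9-a382-82b10e9befc8    # returned by create_lvstore
LVOL_UUID=9a4e599a-256b-4dbb-8848-fcb9bc25dbe9   # returned by bdev_lvol_create

rpc_py nvmf_create_transport -t tcp -o -u 8192
rpc_py bdev_malloc_create 64 512                 # -> Malloc0
rpc_py bdev_malloc_create 64 512                 # -> Malloc1
rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc_py bdev_lvol_create_lvstore raid0 lvs        # -> $LVS_UUID
rpc_py bdev_lvol_create -u "$LVS_UUID" lvol 20   # 20 MiB initial size
rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```

Layering the lvstore on a raid0 of malloc bdevs keeps the test self-contained: no physical disk is consumed, yet the lvol resize/snapshot/clone paths exercised later run against a striped backing device.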
12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=389351 00:33:54.380 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:54.380 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:55.316 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9a4e599a-256b-4dbb-8848-fcb9bc25dbe9 MY_SNAPSHOT 00:33:55.883 12:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c537f556-e139-4534-9f53-495499aa5ce5 00:33:55.883 12:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9a4e599a-256b-4dbb-8848-fcb9bc25dbe9 30 00:33:56.142 12:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c537f556-e139-4534-9f53-495499aa5ce5 MY_CLONE 00:33:56.401 12:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eae855bd-6b08-4ed8-be88-eb38ad7c53cd 00:33:56.401 12:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eae855bd-6b08-4ed8-be88-eb38ad7c53cd 00:33:56.968 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 389351 00:34:05.085 Initializing NVMe Controllers 00:34:05.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:05.085 
Controller IO queue size 128, less than required. 00:34:05.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:05.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:05.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:05.085 Initialization complete. Launching workers. 00:34:05.085 ======================================================== 00:34:05.085 Latency(us) 00:34:05.085 Device Information : IOPS MiB/s Average min max 00:34:05.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13408.60 52.38 9546.80 957.42 53311.03 00:34:05.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8535.90 33.34 15009.34 3276.86 56160.16 00:34:05.085 ======================================================== 00:34:05.085 Total : 21944.50 85.72 11671.60 957.42 56160.16 00:34:05.085 00:34:05.085 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:05.085 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a4e599a-256b-4dbb-8848-fcb9bc25dbe9 00:34:05.343 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1fa1b52-f4b6-4ea9-a382-82b10e9befc8 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
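The `Total` row in the perf summary above is internally consistent: total IOPS is the sum of the two per-core rows, and the total average latency is their IOPS-weighted mean. A quick awk check against the table's numbers:

```shell
#!/usr/bin/env bash
# Recompute the Total row of the spdk_nvme_perf summary from the two
# per-core rows printed above (IOPS and average latency in us).
awk 'BEGIN {
  iops3 = 13408.60; lat3 =  9546.80   # lcore 3 row
  iops4 =  8535.90; lat4 = 15009.34   # lcore 4 row
  total = iops3 + iops4
  avg   = (iops3 * lat3 + iops4 * lat4) / total   # IOPS-weighted mean
  printf "%.2f IOPS, %.2f us avg\n", total, avg
}'
# → 21944.50 IOPS, 11671.60 us avg  (matches the Total row)
```

The weighting matters: a plain mean of the two latencies (~12278 us) would overstate the figure, because the faster core completed more of the I/O.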
nvmftestfini 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.602 rmmod nvme_tcp 00:34:05.602 rmmod nvme_fabrics 00:34:05.602 rmmod nvme_keyring 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 388788 ']' 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 388788 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 388788 ']' 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 388788 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 388788 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 388788' 00:34:05.602 killing process with pid 388788 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 388788 00:34:05.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 388788 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.861 12:40:37 
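The `iptr` teardown at `nvmf/common.sh@791` above removes exactly the firewall rules the test added, by round-tripping the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`: every rule inserted at setup carried an `SPDK_NVMF` comment, so filtering on that tag drops SPDK's rules while leaving everything else untouched. A self-contained simulation of the filter step (a hypothetical saved ruleset stands in for real `iptables-save` output, which needs root):

```shell
#!/usr/bin/env bash
# Simulate the `iptr` teardown filter: drop every rule tagged SPDK_NVMF
# from a saved ruleset, keep the rest. The two-rule ruleset here is a
# made-up stand-in for real iptables-save output.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Real teardown pipes this into iptables-restore; here we just print it.
printf '%s\n' "$saved" | grep -v SPDK_NVMF
# → -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
```

Tagging rules at insert time and filtering at teardown is what lets the test clean up safely on shared CI hosts without tracking rule positions.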
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.861 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.395 00:34:08.395 real 0m23.214s 00:34:08.395 user 0m57.763s 00:34:08.395 sys 0m9.830s 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:08.395 ************************************ 00:34:08.395 END TEST nvmf_lvol 00:34:08.395 ************************************ 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:08.395 ************************************ 00:34:08.395 START TEST nvmf_lvs_grow 00:34:08.395 ************************************ 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:08.395 * Looking for test storage... 
00:34:08.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.395 12:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:08.395 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.396 12:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:08.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.396 --rc genhtml_branch_coverage=1 00:34:08.396 --rc genhtml_function_coverage=1 00:34:08.396 --rc genhtml_legend=1 00:34:08.396 --rc geninfo_all_blocks=1 00:34:08.396 --rc geninfo_unexecuted_blocks=1 00:34:08.396 00:34:08.396 ' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:08.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.396 --rc genhtml_branch_coverage=1 00:34:08.396 --rc genhtml_function_coverage=1 00:34:08.396 --rc genhtml_legend=1 00:34:08.396 --rc geninfo_all_blocks=1 00:34:08.396 --rc geninfo_unexecuted_blocks=1 00:34:08.396 00:34:08.396 ' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:08.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.396 --rc genhtml_branch_coverage=1 00:34:08.396 --rc genhtml_function_coverage=1 00:34:08.396 --rc genhtml_legend=1 00:34:08.396 --rc geninfo_all_blocks=1 00:34:08.396 --rc geninfo_unexecuted_blocks=1 00:34:08.396 00:34:08.396 ' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:08.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.396 --rc genhtml_branch_coverage=1 00:34:08.396 --rc genhtml_function_coverage=1 00:34:08.396 --rc genhtml_legend=1 00:34:08.396 --rc geninfo_all_blocks=1 00:34:08.396 --rc 
geninfo_unexecuted_blocks=1 00:34:08.396 00:34:08.396 ' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:08.396 12:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.396 12:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.396 12:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.396 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.397 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.397 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.397 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.670 
12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.670 12:40:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.670 12:40:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:13.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:13.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.670 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:13.670 Found net devices under 0000:af:00.0: cvl_0_0 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.671 12:40:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:13.671 Found net devices under 0000:af:00.1: cvl_0_1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.671 
12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:13.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:34:13.671 00:34:13.671 --- 10.0.0.2 ping statistics --- 00:34:13.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.671 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:13.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:34:13.671 00:34:13.671 --- 10.0.0.1 ping statistics --- 00:34:13.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.671 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.671 12:40:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=394861 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 394861 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 394861 ']' 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.671 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:13.671 [2024-11-06 12:40:44.791350] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:13.671 [2024-11-06 12:40:44.792693] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:34:13.671 [2024-11-06 12:40:44.792736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.671 [2024-11-06 12:40:44.892436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.671 [2024-11-06 12:40:44.941085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.671 [2024-11-06 12:40:44.941126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.671 [2024-11-06 12:40:44.941136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.671 [2024-11-06 12:40:44.941145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.671 [2024-11-06 12:40:44.941152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:13.671 [2024-11-06 12:40:44.941815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.671 [2024-11-06 12:40:45.016414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:13.671 [2024-11-06 12:40:45.016726] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.671 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:13.941 [2024-11-06 12:40:45.338579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:13.941 ************************************ 00:34:13.941 START TEST lvs_grow_clean 00:34:13.941 ************************************ 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:34:13.941 12:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:13.941 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:14.211 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:14.211 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:14.500 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:14.500 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:14.500 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:14.770 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:14.771 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:14.771 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 lvol 150 00:34:15.071 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=87ae4767-85a8-48cf-ae1d-f1890b6b019c 00:34:15.071 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:15.071 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:15.352 [2024-11-06 12:40:46.782255] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:15.352 [2024-11-06 12:40:46.782371] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:15.352 true 00:34:15.352 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:15.352 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:15.646 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:15.646 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:15.905 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 87ae4767-85a8-48cf-ae1d-f1890b6b019c 00:34:15.905 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.164 [2024-11-06 12:40:47.658786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.164 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=395500 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 395500 /var/tmp/bdevperf.sock 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 395500 ']' 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:16.423 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:16.423 [2024-11-06 12:40:47.973678] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:34:16.423 [2024-11-06 12:40:47.973723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395500 ] 00:34:16.423 [2024-11-06 12:40:48.026841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.682 [2024-11-06 12:40:48.064762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.682 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:16.682 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:34:16.682 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:17.249 Nvme0n1 00:34:17.249 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:17.249 [ 00:34:17.249 { 00:34:17.249 "name": "Nvme0n1", 00:34:17.249 "aliases": [ 00:34:17.249 "87ae4767-85a8-48cf-ae1d-f1890b6b019c" 00:34:17.249 ], 00:34:17.249 "product_name": "NVMe disk", 00:34:17.249 
"block_size": 4096, 00:34:17.249 "num_blocks": 38912, 00:34:17.249 "uuid": "87ae4767-85a8-48cf-ae1d-f1890b6b019c", 00:34:17.249 "numa_id": 1, 00:34:17.249 "assigned_rate_limits": { 00:34:17.249 "rw_ios_per_sec": 0, 00:34:17.249 "rw_mbytes_per_sec": 0, 00:34:17.249 "r_mbytes_per_sec": 0, 00:34:17.249 "w_mbytes_per_sec": 0 00:34:17.249 }, 00:34:17.249 "claimed": false, 00:34:17.249 "zoned": false, 00:34:17.249 "supported_io_types": { 00:34:17.249 "read": true, 00:34:17.249 "write": true, 00:34:17.249 "unmap": true, 00:34:17.249 "flush": true, 00:34:17.249 "reset": true, 00:34:17.249 "nvme_admin": true, 00:34:17.249 "nvme_io": true, 00:34:17.249 "nvme_io_md": false, 00:34:17.249 "write_zeroes": true, 00:34:17.249 "zcopy": false, 00:34:17.249 "get_zone_info": false, 00:34:17.249 "zone_management": false, 00:34:17.249 "zone_append": false, 00:34:17.249 "compare": true, 00:34:17.249 "compare_and_write": true, 00:34:17.249 "abort": true, 00:34:17.249 "seek_hole": false, 00:34:17.249 "seek_data": false, 00:34:17.249 "copy": true, 00:34:17.249 "nvme_iov_md": false 00:34:17.249 }, 00:34:17.249 "memory_domains": [ 00:34:17.249 { 00:34:17.249 "dma_device_id": "system", 00:34:17.249 "dma_device_type": 1 00:34:17.249 } 00:34:17.249 ], 00:34:17.249 "driver_specific": { 00:34:17.249 "nvme": [ 00:34:17.249 { 00:34:17.249 "trid": { 00:34:17.249 "trtype": "TCP", 00:34:17.249 "adrfam": "IPv4", 00:34:17.249 "traddr": "10.0.0.2", 00:34:17.249 "trsvcid": "4420", 00:34:17.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:17.249 }, 00:34:17.249 "ctrlr_data": { 00:34:17.249 "cntlid": 1, 00:34:17.249 "vendor_id": "0x8086", 00:34:17.249 "model_number": "SPDK bdev Controller", 00:34:17.249 "serial_number": "SPDK0", 00:34:17.249 "firmware_revision": "25.01", 00:34:17.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.249 "oacs": { 00:34:17.249 "security": 0, 00:34:17.249 "format": 0, 00:34:17.249 "firmware": 0, 00:34:17.249 "ns_manage": 0 00:34:17.249 }, 00:34:17.249 "multi_ctrlr": true, 
00:34:17.249 "ana_reporting": false 00:34:17.249 }, 00:34:17.249 "vs": { 00:34:17.249 "nvme_version": "1.3" 00:34:17.249 }, 00:34:17.249 "ns_data": { 00:34:17.249 "id": 1, 00:34:17.249 "can_share": true 00:34:17.249 } 00:34:17.249 } 00:34:17.249 ], 00:34:17.249 "mp_policy": "active_passive" 00:34:17.249 } 00:34:17.249 } 00:34:17.249 ] 00:34:17.249 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=395512 00:34:17.249 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:17.249 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:17.508 Running I/O for 10 seconds... 00:34:18.443 Latency(us) 00:34:18.443 [2024-11-06T11:40:50.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:18.443 Nvme0n1 : 1.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:34:18.443 [2024-11-06T11:40:50.058Z] =================================================================================================================== 00:34:18.443 [2024-11-06T11:40:50.058Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:34:18.443 00:34:19.376 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:19.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:19.376 Nvme0n1 : 2.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:34:19.376 [2024-11-06T11:40:50.991Z] 
=================================================================================================================== 00:34:19.376 [2024-11-06T11:40:50.991Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:34:19.376 00:34:19.634 true 00:34:19.634 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:19.634 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:19.892 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:19.892 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:19.892 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 395512 00:34:20.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.458 Nvme0n1 : 3.00 14801.33 57.82 0.00 0.00 0.00 0.00 0.00 00:34:20.458 [2024-11-06T11:40:52.073Z] =================================================================================================================== 00:34:20.458 [2024-11-06T11:40:52.073Z] Total : 14801.33 57.82 0.00 0.00 0.00 0.00 0.00 00:34:20.458 00:34:21.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.392 Nvme0n1 : 4.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:34:21.392 [2024-11-06T11:40:53.007Z] =================================================================================================================== 00:34:21.392 [2024-11-06T11:40:53.007Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:34:21.392 00:34:22.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:34:22.325 Nvme0n1 : 5.00 14884.40 58.14 0.00 0.00 0.00 0.00 0.00 00:34:22.325 [2024-11-06T11:40:53.940Z] =================================================================================================================== 00:34:22.325 [2024-11-06T11:40:53.940Z] Total : 14884.40 58.14 0.00 0.00 0.00 0.00 0.00 00:34:22.325 00:34:23.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.699 Nvme0n1 : 6.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:34:23.699 [2024-11-06T11:40:55.314Z] =================================================================================================================== 00:34:23.699 [2024-11-06T11:40:55.314Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:34:23.699 00:34:24.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.633 Nvme0n1 : 7.00 14949.71 58.40 0.00 0.00 0.00 0.00 0.00 00:34:24.633 [2024-11-06T11:40:56.248Z] =================================================================================================================== 00:34:24.633 [2024-11-06T11:40:56.248Z] Total : 14949.71 58.40 0.00 0.00 0.00 0.00 0.00 00:34:24.633 00:34:25.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.567 Nvme0n1 : 8.00 14970.12 58.48 0.00 0.00 0.00 0.00 0.00 00:34:25.567 [2024-11-06T11:40:57.182Z] =================================================================================================================== 00:34:25.567 [2024-11-06T11:40:57.182Z] Total : 14970.12 58.48 0.00 0.00 0.00 0.00 0.00 00:34:25.567 00:34:26.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:26.502 Nvme0n1 : 9.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:34:26.502 [2024-11-06T11:40:58.117Z] =================================================================================================================== 00:34:26.502 [2024-11-06T11:40:58.117Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:34:26.502 
00:34:27.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.439 Nvme0n1 : 10.00 14998.70 58.59 0.00 0.00 0.00 0.00 0.00 00:34:27.439 [2024-11-06T11:40:59.054Z] =================================================================================================================== 00:34:27.439 [2024-11-06T11:40:59.054Z] Total : 14998.70 58.59 0.00 0.00 0.00 0.00 0.00 00:34:27.439 00:34:27.439 00:34:27.439 Latency(us) 00:34:27.439 [2024-11-06T11:40:59.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.439 Nvme0n1 : 10.01 14999.80 58.59 0.00 0.00 8529.70 7089.80 26810.18 00:34:27.439 [2024-11-06T11:40:59.054Z] =================================================================================================================== 00:34:27.439 [2024-11-06T11:40:59.054Z] Total : 14999.80 58.59 0.00 0.00 8529.70 7089.80 26810.18 00:34:27.439 { 00:34:27.439 "results": [ 00:34:27.439 { 00:34:27.439 "job": "Nvme0n1", 00:34:27.439 "core_mask": "0x2", 00:34:27.439 "workload": "randwrite", 00:34:27.439 "status": "finished", 00:34:27.439 "queue_depth": 128, 00:34:27.439 "io_size": 4096, 00:34:27.439 "runtime": 10.007803, 00:34:27.439 "iops": 14999.795659446934, 00:34:27.439 "mibps": 58.59295179471459, 00:34:27.439 "io_failed": 0, 00:34:27.439 "io_timeout": 0, 00:34:27.439 "avg_latency_us": 8529.700616388041, 00:34:27.439 "min_latency_us": 7089.8036363636365, 00:34:27.439 "max_latency_us": 26810.18181818182 00:34:27.439 } 00:34:27.439 ], 00:34:27.439 "core_count": 1 00:34:27.439 } 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 395500 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 395500 ']' 00:34:27.439 12:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 395500 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 395500 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 395500' 00:34:27.439 killing process with pid 395500 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 395500 00:34:27.439 Received shutdown signal, test time was about 10.000000 seconds 00:34:27.439 00:34:27.439 Latency(us) 00:34:27.439 [2024-11-06T11:40:59.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.439 [2024-11-06T11:40:59.054Z] =================================================================================================================== 00:34:27.439 [2024-11-06T11:40:59.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:27.439 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 395500 00:34:27.698 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:27.698 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.956 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:27.956 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:28.215 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:28.215 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:28.215 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:28.474 [2024-11-06 12:40:59.906330] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:28.474 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:28.734 request: 00:34:28.734 { 00:34:28.734 "uuid": "1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1", 00:34:28.734 "method": 
"bdev_lvol_get_lvstores", 00:34:28.734 "req_id": 1 00:34:28.734 } 00:34:28.734 Got JSON-RPC error response 00:34:28.735 response: 00:34:28.735 { 00:34:28.735 "code": -19, 00:34:28.735 "message": "No such device" 00:34:28.735 } 00:34:28.735 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:34:28.735 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:28.735 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:28.735 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:28.735 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:28.994 aio_bdev 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 87ae4767-85a8-48cf-ae1d-f1890b6b019c 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=87ae4767-85a8-48cf-ae1d-f1890b6b019c 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:28.994 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87ae4767-85a8-48cf-ae1d-f1890b6b019c -t 2000 00:34:29.252 [ 00:34:29.252 { 00:34:29.252 "name": "87ae4767-85a8-48cf-ae1d-f1890b6b019c", 00:34:29.252 "aliases": [ 00:34:29.252 "lvs/lvol" 00:34:29.252 ], 00:34:29.252 "product_name": "Logical Volume", 00:34:29.252 "block_size": 4096, 00:34:29.252 "num_blocks": 38912, 00:34:29.252 "uuid": "87ae4767-85a8-48cf-ae1d-f1890b6b019c", 00:34:29.252 "assigned_rate_limits": { 00:34:29.252 "rw_ios_per_sec": 0, 00:34:29.252 "rw_mbytes_per_sec": 0, 00:34:29.252 "r_mbytes_per_sec": 0, 00:34:29.252 "w_mbytes_per_sec": 0 00:34:29.252 }, 00:34:29.252 "claimed": false, 00:34:29.252 "zoned": false, 00:34:29.252 "supported_io_types": { 00:34:29.252 "read": true, 00:34:29.252 "write": true, 00:34:29.252 "unmap": true, 00:34:29.252 "flush": false, 00:34:29.252 "reset": true, 00:34:29.252 "nvme_admin": false, 00:34:29.252 "nvme_io": false, 00:34:29.252 "nvme_io_md": false, 00:34:29.252 "write_zeroes": true, 00:34:29.252 "zcopy": false, 00:34:29.252 "get_zone_info": false, 00:34:29.252 "zone_management": false, 00:34:29.252 "zone_append": false, 00:34:29.252 "compare": false, 00:34:29.252 "compare_and_write": false, 00:34:29.252 "abort": false, 00:34:29.252 "seek_hole": true, 00:34:29.252 "seek_data": true, 00:34:29.252 "copy": false, 00:34:29.252 "nvme_iov_md": false 00:34:29.252 }, 00:34:29.252 "driver_specific": { 00:34:29.252 "lvol": { 00:34:29.252 "lvol_store_uuid": "1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1", 00:34:29.252 "base_bdev": "aio_bdev", 00:34:29.252 
"thin_provision": false, 00:34:29.252 "num_allocated_clusters": 38, 00:34:29.253 "snapshot": false, 00:34:29.253 "clone": false, 00:34:29.253 "esnap_clone": false 00:34:29.253 } 00:34:29.253 } 00:34:29.253 } 00:34:29.253 ] 00:34:29.253 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:34:29.253 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:29.253 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:29.511 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:29.511 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 00:34:29.511 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:29.770 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:29.770 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87ae4767-85a8-48cf-ae1d-f1890b6b019c 00:34:30.028 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1747f3e0-f6f0-4bcd-a219-67c4d07e1ef1 
00:34:30.286 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:30.544 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:30.544 00:34:30.544 real 0m16.616s 00:34:30.544 user 0m16.437s 00:34:30.544 sys 0m1.541s 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.544 ************************************ 00:34:30.544 END TEST lvs_grow_clean 00:34:30.544 ************************************ 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:30.544 ************************************ 00:34:30.544 START TEST lvs_grow_dirty 00:34:30.544 ************************************ 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:30.544 12:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:30.544 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:30.803 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:30.803 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:31.062 12:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=268f35d3-75c3-42a2-816e-0da0462dea46 00:34:31.062 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:31.062 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:31.320 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:31.320 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:31.320 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 268f35d3-75c3-42a2-816e-0da0462dea46 lvol 150 00:34:31.579 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:31.579 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:31.579 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:31.837 [2024-11-06 12:41:03.214269] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:31.837 [2024-11-06 
12:41:03.214409] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:31.837 true 00:34:31.837 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:31.837 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:32.096 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:32.096 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:32.354 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:32.354 12:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.612 [2024-11-06 12:41:04.182755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.612 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=398290 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 398290 /var/tmp/bdevperf.sock 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 398290 ']' 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:32.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:32.871 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:32.871 [2024-11-06 12:41:04.390588] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:34:32.871 [2024-11-06 12:41:04.390632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398290 ] 00:34:32.871 [2024-11-06 12:41:04.444764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.871 [2024-11-06 12:41:04.484601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.129 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:33.129 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:34:33.129 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:33.387 Nvme0n1 00:34:33.387 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:33.645 [ 00:34:33.645 { 00:34:33.645 "name": "Nvme0n1", 00:34:33.645 "aliases": [ 00:34:33.645 "b172a32f-fba2-433f-b98f-4c127fd1c170" 00:34:33.645 ], 00:34:33.645 "product_name": "NVMe disk", 00:34:33.645 "block_size": 4096, 00:34:33.645 "num_blocks": 38912, 00:34:33.645 "uuid": "b172a32f-fba2-433f-b98f-4c127fd1c170", 00:34:33.645 "numa_id": 1, 00:34:33.645 "assigned_rate_limits": { 00:34:33.645 "rw_ios_per_sec": 0, 00:34:33.645 "rw_mbytes_per_sec": 0, 00:34:33.645 "r_mbytes_per_sec": 0, 00:34:33.645 "w_mbytes_per_sec": 0 00:34:33.645 }, 00:34:33.645 "claimed": false, 00:34:33.645 "zoned": false, 
00:34:33.645 "supported_io_types": { 00:34:33.645 "read": true, 00:34:33.645 "write": true, 00:34:33.645 "unmap": true, 00:34:33.645 "flush": true, 00:34:33.645 "reset": true, 00:34:33.645 "nvme_admin": true, 00:34:33.645 "nvme_io": true, 00:34:33.645 "nvme_io_md": false, 00:34:33.645 "write_zeroes": true, 00:34:33.645 "zcopy": false, 00:34:33.645 "get_zone_info": false, 00:34:33.645 "zone_management": false, 00:34:33.645 "zone_append": false, 00:34:33.645 "compare": true, 00:34:33.645 "compare_and_write": true, 00:34:33.645 "abort": true, 00:34:33.645 "seek_hole": false, 00:34:33.645 "seek_data": false, 00:34:33.645 "copy": true, 00:34:33.645 "nvme_iov_md": false 00:34:33.645 }, 00:34:33.645 "memory_domains": [ 00:34:33.645 { 00:34:33.645 "dma_device_id": "system", 00:34:33.645 "dma_device_type": 1 00:34:33.645 } 00:34:33.645 ], 00:34:33.645 "driver_specific": { 00:34:33.645 "nvme": [ 00:34:33.645 { 00:34:33.645 "trid": { 00:34:33.645 "trtype": "TCP", 00:34:33.645 "adrfam": "IPv4", 00:34:33.645 "traddr": "10.0.0.2", 00:34:33.645 "trsvcid": "4420", 00:34:33.645 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:33.645 }, 00:34:33.645 "ctrlr_data": { 00:34:33.645 "cntlid": 1, 00:34:33.645 "vendor_id": "0x8086", 00:34:33.645 "model_number": "SPDK bdev Controller", 00:34:33.645 "serial_number": "SPDK0", 00:34:33.645 "firmware_revision": "25.01", 00:34:33.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.645 "oacs": { 00:34:33.645 "security": 0, 00:34:33.645 "format": 0, 00:34:33.645 "firmware": 0, 00:34:33.645 "ns_manage": 0 00:34:33.645 }, 00:34:33.645 "multi_ctrlr": true, 00:34:33.645 "ana_reporting": false 00:34:33.645 }, 00:34:33.645 "vs": { 00:34:33.645 "nvme_version": "1.3" 00:34:33.645 }, 00:34:33.645 "ns_data": { 00:34:33.645 "id": 1, 00:34:33.645 "can_share": true 00:34:33.645 } 00:34:33.645 } 00:34:33.645 ], 00:34:33.645 "mp_policy": "active_passive" 00:34:33.645 } 00:34:33.645 } 00:34:33.645 ] 00:34:33.645 12:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=398445 00:34:33.645 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:33.645 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:33.645 Running I/O for 10 seconds... 00:34:34.580 Latency(us) 00:34:34.580 [2024-11-06T11:41:06.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:34.580 Nvme0n1 : 1.00 14622.00 57.12 0.00 0.00 0.00 0.00 0.00 00:34:34.580 [2024-11-06T11:41:06.195Z] =================================================================================================================== 00:34:34.580 [2024-11-06T11:41:06.195Z] Total : 14622.00 57.12 0.00 0.00 0.00 0.00 0.00 00:34:34.580 00:34:35.514 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:35.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:35.772 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:34:35.772 [2024-11-06T11:41:07.387Z] =================================================================================================================== 00:34:35.772 [2024-11-06T11:41:07.387Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:34:35.772 00:34:35.772 true 00:34:36.031 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:36.031 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:36.289 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:36.289 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:36.289 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 398445 00:34:36.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:36.855 Nvme0n1 : 3.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:34:36.855 [2024-11-06T11:41:08.470Z] =================================================================================================================== 00:34:36.855 [2024-11-06T11:41:08.470Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:34:36.855 00:34:37.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:37.790 Nvme0n1 : 4.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:34:37.790 [2024-11-06T11:41:09.405Z] =================================================================================================================== 00:34:37.790 [2024-11-06T11:41:09.405Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:34:37.790 00:34:38.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.725 Nvme0n1 : 5.00 14935.20 58.34 0.00 0.00 0.00 0.00 0.00 00:34:38.725 [2024-11-06T11:41:10.340Z] =================================================================================================================== 00:34:38.725 [2024-11-06T11:41:10.340Z] Total : 14935.20 58.34 0.00 0.00 0.00 0.00 0.00 00:34:38.725 00:34:39.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:39.659 Nvme0n1 : 6.00 14964.83 58.46 0.00 0.00 0.00 0.00 0.00 00:34:39.659 [2024-11-06T11:41:11.274Z] =================================================================================================================== 00:34:39.659 [2024-11-06T11:41:11.274Z] Total : 14964.83 58.46 0.00 0.00 0.00 0.00 0.00 00:34:39.659 00:34:40.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.593 Nvme0n1 : 7.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:34:40.593 [2024-11-06T11:41:12.208Z] =================================================================================================================== 00:34:40.593 [2024-11-06T11:41:12.208Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:34:40.593 00:34:41.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.968 Nvme0n1 : 8.00 15001.88 58.60 0.00 0.00 0.00 0.00 0.00 00:34:41.968 [2024-11-06T11:41:13.583Z] =================================================================================================================== 00:34:41.968 [2024-11-06T11:41:13.583Z] Total : 15001.88 58.60 0.00 0.00 0.00 0.00 0.00 00:34:41.968 00:34:42.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:42.902 Nvme0n1 : 9.00 15014.22 58.65 0.00 0.00 0.00 0.00 0.00 00:34:42.902 [2024-11-06T11:41:14.517Z] =================================================================================================================== 00:34:42.902 [2024-11-06T11:41:14.517Z] Total : 15014.22 58.65 0.00 0.00 0.00 0.00 0.00 00:34:42.902 00:34:43.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:43.837 Nvme0n1 : 10.00 15030.50 58.71 0.00 0.00 0.00 0.00 0.00 00:34:43.837 [2024-11-06T11:41:15.452Z] =================================================================================================================== 00:34:43.837 [2024-11-06T11:41:15.452Z] Total : 15030.50 58.71 0.00 0.00 0.00 0.00 0.00 00:34:43.837 00:34:43.837 
00:34:43.837 Latency(us) 00:34:43.837 [2024-11-06T11:41:15.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:43.837 Nvme0n1 : 10.01 15035.73 58.73 0.00 0.00 8508.95 5332.25 26333.56 00:34:43.837 [2024-11-06T11:41:15.452Z] =================================================================================================================== 00:34:43.837 [2024-11-06T11:41:15.452Z] Total : 15035.73 58.73 0.00 0.00 8508.95 5332.25 26333.56 00:34:43.837 { 00:34:43.837 "results": [ 00:34:43.837 { 00:34:43.837 "job": "Nvme0n1", 00:34:43.837 "core_mask": "0x2", 00:34:43.837 "workload": "randwrite", 00:34:43.837 "status": "finished", 00:34:43.837 "queue_depth": 128, 00:34:43.837 "io_size": 4096, 00:34:43.837 "runtime": 10.005033, 00:34:43.837 "iops": 15035.732515824786, 00:34:43.837 "mibps": 58.73333013994057, 00:34:43.837 "io_failed": 0, 00:34:43.837 "io_timeout": 0, 00:34:43.837 "avg_latency_us": 8508.94521030504, 00:34:43.837 "min_latency_us": 5332.2472727272725, 00:34:43.837 "max_latency_us": 26333.556363636362 00:34:43.837 } 00:34:43.837 ], 00:34:43.837 "core_count": 1 00:34:43.837 } 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 398290 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 398290 ']' 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 398290 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:43.837 12:41:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 398290 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 398290' 00:34:43.837 killing process with pid 398290 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 398290 00:34:43.837 Received shutdown signal, test time was about 10.000000 seconds 00:34:43.837 00:34:43.837 Latency(us) 00:34:43.837 [2024-11-06T11:41:15.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.837 [2024-11-06T11:41:15.452Z] =================================================================================================================== 00:34:43.837 [2024-11-06T11:41:15.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:43.837 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 398290 00:34:43.838 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:44.404 12:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.404 12:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:44.404 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:44.662 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:44.662 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:44.662 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 394861 00:34:44.662 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 394861 00:34:44.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 394861 Killed "${NVMF_APP[@]}" "$@" 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=400282 00:34:44.663 12:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 400282 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 400282 ']' 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:44.663 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:44.663 [2024-11-06 12:41:16.250989] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:44.663 [2024-11-06 12:41:16.252314] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:34:44.663 [2024-11-06 12:41:16.252358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.921 [2024-11-06 12:41:16.353242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.921 [2024-11-06 12:41:16.401281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.921 [2024-11-06 12:41:16.401322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.921 [2024-11-06 12:41:16.401333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.921 [2024-11-06 12:41:16.401342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.921 [2024-11-06 12:41:16.401350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.921 [2024-11-06 12:41:16.402040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.921 [2024-11-06 12:41:16.475799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:44.921 [2024-11-06 12:41:16.476103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.921 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:45.180 [2024-11-06 12:41:16.795696] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:45.180 [2024-11-06 12:41:16.795889] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:45.180 [2024-11-06 12:41:16.795974] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:45.438 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b172a32f-fba2-433f-b98f-4c127fd1c170 -t 2000 00:34:45.696 [ 00:34:45.696 { 00:34:45.696 "name": "b172a32f-fba2-433f-b98f-4c127fd1c170", 00:34:45.696 "aliases": [ 00:34:45.696 "lvs/lvol" 00:34:45.696 ], 00:34:45.696 "product_name": "Logical Volume", 00:34:45.696 "block_size": 4096, 00:34:45.697 "num_blocks": 38912, 00:34:45.697 "uuid": "b172a32f-fba2-433f-b98f-4c127fd1c170", 00:34:45.697 "assigned_rate_limits": { 00:34:45.697 "rw_ios_per_sec": 0, 00:34:45.697 "rw_mbytes_per_sec": 0, 00:34:45.697 "r_mbytes_per_sec": 0, 00:34:45.697 "w_mbytes_per_sec": 0 00:34:45.697 }, 00:34:45.697 "claimed": false, 00:34:45.697 "zoned": false, 00:34:45.697 "supported_io_types": { 00:34:45.697 "read": true, 00:34:45.697 "write": true, 00:34:45.697 "unmap": true, 00:34:45.697 "flush": false, 00:34:45.697 "reset": true, 00:34:45.697 "nvme_admin": false, 00:34:45.697 "nvme_io": false, 00:34:45.697 "nvme_io_md": false, 00:34:45.697 "write_zeroes": true, 
00:34:45.697 "zcopy": false, 00:34:45.697 "get_zone_info": false, 00:34:45.697 "zone_management": false, 00:34:45.697 "zone_append": false, 00:34:45.697 "compare": false, 00:34:45.697 "compare_and_write": false, 00:34:45.697 "abort": false, 00:34:45.697 "seek_hole": true, 00:34:45.697 "seek_data": true, 00:34:45.697 "copy": false, 00:34:45.697 "nvme_iov_md": false 00:34:45.697 }, 00:34:45.697 "driver_specific": { 00:34:45.697 "lvol": { 00:34:45.697 "lvol_store_uuid": "268f35d3-75c3-42a2-816e-0da0462dea46", 00:34:45.697 "base_bdev": "aio_bdev", 00:34:45.697 "thin_provision": false, 00:34:45.697 "num_allocated_clusters": 38, 00:34:45.697 "snapshot": false, 00:34:45.697 "clone": false, 00:34:45.697 "esnap_clone": false 00:34:45.697 } 00:34:45.697 } 00:34:45.697 } 00:34:45.697 ] 00:34:45.697 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:34:45.697 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:45.697 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:45.955 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:45.955 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:45.955 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:46.213 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:46.213 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:46.471 [2024-11-06 12:41:17.846594] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:46.471 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:46.729 request: 00:34:46.729 { 00:34:46.729 "uuid": "268f35d3-75c3-42a2-816e-0da0462dea46", 00:34:46.729 "method": "bdev_lvol_get_lvstores", 00:34:46.729 "req_id": 1 00:34:46.729 } 00:34:46.729 Got JSON-RPC error response 00:34:46.729 response: 00:34:46.729 { 00:34:46.729 "code": -19, 00:34:46.729 "message": "No such device" 00:34:46.729 } 00:34:46.729 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:34:46.729 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:46.729 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:46.729 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:46.729 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:46.987 aio_bdev 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:46.987 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b172a32f-fba2-433f-b98f-4c127fd1c170 -t 2000 00:34:47.245 [ 00:34:47.245 { 00:34:47.245 "name": "b172a32f-fba2-433f-b98f-4c127fd1c170", 00:34:47.245 "aliases": [ 00:34:47.245 "lvs/lvol" 00:34:47.245 ], 00:34:47.245 "product_name": "Logical Volume", 00:34:47.245 "block_size": 4096, 00:34:47.245 "num_blocks": 38912, 00:34:47.245 "uuid": "b172a32f-fba2-433f-b98f-4c127fd1c170", 00:34:47.245 "assigned_rate_limits": { 00:34:47.245 "rw_ios_per_sec": 0, 00:34:47.245 "rw_mbytes_per_sec": 0, 00:34:47.245 
"r_mbytes_per_sec": 0, 00:34:47.245 "w_mbytes_per_sec": 0 00:34:47.245 }, 00:34:47.245 "claimed": false, 00:34:47.245 "zoned": false, 00:34:47.245 "supported_io_types": { 00:34:47.245 "read": true, 00:34:47.245 "write": true, 00:34:47.245 "unmap": true, 00:34:47.245 "flush": false, 00:34:47.245 "reset": true, 00:34:47.245 "nvme_admin": false, 00:34:47.245 "nvme_io": false, 00:34:47.245 "nvme_io_md": false, 00:34:47.245 "write_zeroes": true, 00:34:47.245 "zcopy": false, 00:34:47.245 "get_zone_info": false, 00:34:47.245 "zone_management": false, 00:34:47.245 "zone_append": false, 00:34:47.245 "compare": false, 00:34:47.245 "compare_and_write": false, 00:34:47.245 "abort": false, 00:34:47.245 "seek_hole": true, 00:34:47.245 "seek_data": true, 00:34:47.245 "copy": false, 00:34:47.245 "nvme_iov_md": false 00:34:47.245 }, 00:34:47.245 "driver_specific": { 00:34:47.245 "lvol": { 00:34:47.245 "lvol_store_uuid": "268f35d3-75c3-42a2-816e-0da0462dea46", 00:34:47.245 "base_bdev": "aio_bdev", 00:34:47.245 "thin_provision": false, 00:34:47.245 "num_allocated_clusters": 38, 00:34:47.245 "snapshot": false, 00:34:47.245 "clone": false, 00:34:47.245 "esnap_clone": false 00:34:47.245 } 00:34:47.245 } 00:34:47.245 } 00:34:47.245 ] 00:34:47.503 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:34:47.503 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:47.503 12:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:47.503 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:47.503 12:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:47.503 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:47.761 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:47.761 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b172a32f-fba2-433f-b98f-4c127fd1c170 00:34:48.019 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 268f35d3-75c3-42a2-816e-0da0462dea46 00:34:48.276 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:48.534 00:34:48.534 real 0m17.897s 00:34:48.534 user 0m35.708s 00:34:48.534 sys 0m3.550s 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:48.534 ************************************ 00:34:48.534 END TEST lvs_grow_dirty 00:34:48.534 ************************************ 
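The lvs_grow_dirty test above exercises a negative path: after `bdev_aio_delete` removes the backing bdev, the follow-up `bdev_lvol_get_lvstores -u 268f35d3-75c3-42a2-816e-0da0462dea46` is expected to fail with the JSON-RPC error logged earlier (`"code": -19`, `"message": "No such device"`). A minimal sketch of how a caller might check for that error shape — using a hand-built payload mirroring the log, not a response captured from a live SPDK target:

```python
import json

def is_no_such_device(payload: str) -> bool:
    """Return True if a JSON-RPC error payload carries code -19 (ENODEV)."""
    # SPDK surfaces the errno of a missing object (-ENODEV, i.e. -19)
    # as the JSON-RPC error code, as seen in the log above.
    err = json.loads(payload)
    return err.get("code") == -19

# Hypothetical payload, shaped like the error response in the log.
raw = '{"code": -19, "message": "No such device"}'
print(is_no_such_device(raw))  # -> True
```

This is the same check the test script performs implicitly via its `NOT ... rpc.py` wrapper: the RPC must fail, and the failure must be the expected missing-lvstore error rather than some other exception.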
00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:34:48.534 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:48.534 nvmf_trace.0 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.534 12:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.534 rmmod nvme_tcp 00:34:48.534 rmmod nvme_fabrics 00:34:48.534 rmmod nvme_keyring 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 400282 ']' 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 400282 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 400282 ']' 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 400282 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:48.534 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400282 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:48.792 12:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400282' 00:34:48.792 killing process with pid 400282 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 400282 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 400282 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.792 12:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.326 12:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.326 00:34:51.326 real 0m42.825s 00:34:51.326 user 0m54.386s 00:34:51.326 sys 0m9.090s 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:51.326 ************************************ 00:34:51.326 END TEST nvmf_lvs_grow 00:34:51.326 ************************************ 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:51.326 ************************************ 00:34:51.326 START TEST nvmf_bdev_io_wait 00:34:51.326 ************************************ 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:51.326 * Looking for test storage... 
00:34:51.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:51.326 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.327 --rc genhtml_branch_coverage=1 00:34:51.327 --rc genhtml_function_coverage=1 00:34:51.327 --rc genhtml_legend=1 00:34:51.327 --rc geninfo_all_blocks=1 00:34:51.327 --rc geninfo_unexecuted_blocks=1 00:34:51.327 00:34:51.327 ' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.327 --rc genhtml_branch_coverage=1 00:34:51.327 --rc genhtml_function_coverage=1 00:34:51.327 --rc genhtml_legend=1 00:34:51.327 --rc geninfo_all_blocks=1 00:34:51.327 --rc geninfo_unexecuted_blocks=1 00:34:51.327 00:34:51.327 ' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.327 --rc genhtml_branch_coverage=1 00:34:51.327 --rc genhtml_function_coverage=1 00:34:51.327 --rc genhtml_legend=1 00:34:51.327 --rc geninfo_all_blocks=1 00:34:51.327 --rc geninfo_unexecuted_blocks=1 00:34:51.327 00:34:51.327 ' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.327 --rc genhtml_branch_coverage=1 00:34:51.327 --rc genhtml_function_coverage=1 
00:34:51.327 --rc genhtml_legend=1 00:34:51.327 --rc geninfo_all_blocks=1 00:34:51.327 --rc geninfo_unexecuted_blocks=1 00:34:51.327 00:34:51.327 ' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:51.327 12:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.327 12:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.327 12:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.327 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.328 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.328 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.328 12:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.328 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:56.598 12:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:56.598 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:56.598 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:56.599 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:56.599 Found net devices under 0000:af:00.0: cvl_0_0 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:56.599 Found net devices under 0000:af:00.1: cvl_0_1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.599 12:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:56.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:34:56.599 00:34:56.599 --- 10.0.0.2 ping statistics --- 00:34:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.599 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:34:56.599 00:34:56.599 --- 10.0.0.1 ping statistics --- 00:34:56.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.599 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.599 12:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=404593 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 404593 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 404593 ']' 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
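The `waitforlisten 404593` step above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that retry pattern, simplified to wait for a filesystem path to appear rather than probing a live UNIX socket (`wait_for_path` and `max_retries` are illustrative names, not the harness's own):

```shell
#!/usr/bin/env bash
# Simplified analogue of waitforlisten: poll until a path exists, returning 0
# on success; give up after max_retries attempts and return 1.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -e $path ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: create the path in the background after a short delay, then wait on it.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
if wait_for_path "$tmp/spdk.sock"; then
    echo "listening"   # prints once the path shows up
fi
wait            # reap the background helper
rm -rf "$tmp"
```

The real helper additionally checks that the process is still alive between retries, so a crashed target fails fast instead of burning the full timeout.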
00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.599 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:56.599 [2024-11-06 12:41:27.918782] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.599 [2024-11-06 12:41:27.920105] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:34:56.599 [2024-11-06 12:41:27.920148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.599 [2024-11-06 12:41:28.021639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.599 [2024-11-06 12:41:28.073100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.600 [2024-11-06 12:41:28.073142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.600 [2024-11-06 12:41:28.073152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.600 [2024-11-06 12:41:28.073161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.600 [2024-11-06 12:41:28.073168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
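The `-m 0xF` mask passed to `nvmf_tgt` above is why four reactors come up, one per core 0-3. A small sketch of how such a hex CPU mask maps to core indices (`mask_to_cores` is an illustrative helper, not part of the test scripts):

```shell
#!/usr/bin/env bash
# Expand a hex CPU mask into the list of core indices whose bit is set.
mask_to_cores() {
    local mask=$(( $1 )) i cores=()
    for ((i = 0; i < 64; i++)); do
        if (( (mask >> i) & 1 )); then
            cores+=("$i")
        fi
    done
    echo "${cores[*]}"
}

mask_to_cores 0xF     # → 0 1 2 3
mask_to_cores 0x41    # → 0 6
```

The bdevperf instances launched later in this test use disjoint single-bit masks (`0x10`, `0x20`, `0x40`, `0x80`) so each I/O workload is pinned to its own core, away from the target's cores 0-3.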
00:34:56.600 [2024-11-06 12:41:28.075167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.600 [2024-11-06 12:41:28.075273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.600 [2024-11-06 12:41:28.075375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.600 [2024-11-06 12:41:28.075380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.600 [2024-11-06 12:41:28.075733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.600 12:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.600 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.859 [2024-11-06 12:41:28.241721] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:56.859 [2024-11-06 12:41:28.241917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:56.859 [2024-11-06 12:41:28.242623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:56.859 [2024-11-06 12:41:28.243275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
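Earlier in the trace (`nvmf/common.sh@293`), the target command line is rebuilt as `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` so that the app runs inside the test's network namespace. A sketch of that array-prefixing pattern (app arguments here are examples taken from this run, not a general default):

```shell
#!/usr/bin/env bash
# Prepend the "ip netns exec <ns>" wrapper to the target command. Using arrays
# on both sides keeps every argument intact under word splitting and quoting.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 --interrupt-mode -m 0xF)

NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
# → ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 --interrupt-mode -m 0xF
```

Because only the target is wrapped this way, the initiator-side tools in the same test run in the default namespace and reach the target over the `cvl_0_1` → `cvl_0_0` link set up above.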
00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.859 [2024-11-06 12:41:28.248096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.859 Malloc0 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.859 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.860 12:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:56.860 [2024-11-06 12:41:28.300348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=404615 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=404616 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=404618 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=404620 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:56.860 12:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.860 { 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme$subsystem", 00:34:56.860 "trtype": "$TEST_TRANSPORT", 00:34:56.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 
"trsvcid": "$NVMF_PORT", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.860 "hdgst": ${hdgst:-false}, 00:34:56.860 "ddgst": ${ddgst:-false} 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 } 00:34:56.860 EOF 00:34:56.860 )") 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.860 { 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme$subsystem", 00:34:56.860 "trtype": "$TEST_TRANSPORT", 00:34:56.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "$NVMF_PORT", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.860 "hdgst": ${hdgst:-false}, 00:34:56.860 "ddgst": ${ddgst:-false} 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 } 00:34:56.860 EOF 00:34:56.860 )") 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:56.860 12:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.860 { 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme$subsystem", 00:34:56.860 "trtype": "$TEST_TRANSPORT", 00:34:56.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "$NVMF_PORT", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.860 "hdgst": ${hdgst:-false}, 00:34:56.860 "ddgst": ${ddgst:-false} 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 } 00:34:56.860 EOF 00:34:56.860 )") 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 404615 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.860 { 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme$subsystem", 00:34:56.860 "trtype": "$TEST_TRANSPORT", 00:34:56.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "$NVMF_PORT", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.860 "hdgst": ${hdgst:-false}, 00:34:56.860 "ddgst": ${ddgst:-false} 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 } 00:34:56.860 EOF 00:34:56.860 )") 
00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme1", 00:34:56.860 "trtype": "tcp", 00:34:56.860 "traddr": "10.0.0.2", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "4420", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.860 "hdgst": false, 00:34:56.860 "ddgst": false 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 }' 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme1", 00:34:56.860 "trtype": "tcp", 00:34:56.860 "traddr": "10.0.0.2", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "4420", 
00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.860 "hdgst": false, 00:34:56.860 "ddgst": false 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 }' 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme1", 00:34:56.860 "trtype": "tcp", 00:34:56.860 "traddr": "10.0.0.2", 00:34:56.860 "adrfam": "ipv4", 00:34:56.860 "trsvcid": "4420", 00:34:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.860 "hdgst": false, 00:34:56.860 "ddgst": false 00:34:56.860 }, 00:34:56.860 "method": "bdev_nvme_attach_controller" 00:34:56.860 }' 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:56.860 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:56.860 "params": { 00:34:56.860 "name": "Nvme1", 00:34:56.861 "trtype": "tcp", 00:34:56.861 "traddr": "10.0.0.2", 00:34:56.861 "adrfam": "ipv4", 00:34:56.861 "trsvcid": "4420", 00:34:56.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.861 "hdgst": false, 00:34:56.861 "ddgst": false 00:34:56.861 }, 00:34:56.861 "method": "bdev_nvme_attach_controller" 00:34:56.861 }' 00:34:56.861 [2024-11-06 12:41:28.357085] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:34:56.861 [2024-11-06 12:41:28.357084] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:34:56.861 [2024-11-06 12:41:28.357147] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:56.861 [2024-11-06 12:41:28.357148] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:56.861 [2024-11-06 12:41:28.359548] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:34:56.861 [2024-11-06 12:41:28.359550] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:34:56.861 [2024-11-06 12:41:28.359606] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:56.861 [2024-11-06 12:41:28.359608] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:57.119 [2024-11-06 12:41:28.568472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.119 [2024-11-06 12:41:28.618095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:57.119 [2024-11-06 12:41:28.632052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.119 [2024-11-06 12:41:28.681830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:57.119 [2024-11-06 12:41:28.722528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.377 [2024-11-06 12:41:28.772366] reactor.c:1005:reactor_run: *NOTICE*:
Reactor started on core 7 00:34:57.377 [2024-11-06 12:41:28.782673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.377 [2024-11-06 12:41:28.832101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:57.377 Running I/O for 1 seconds... 00:34:57.377 Running I/O for 1 seconds... 00:34:57.377 Running I/O for 1 seconds... 00:34:57.635 Running I/O for 1 seconds... 00:34:58.567 15276.00 IOPS, 59.67 MiB/s 00:34:58.567 Latency(us) 00:34:58.567 [2024-11-06T11:41:30.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.568 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:58.568 Nvme1n1 : 1.01 15341.81 59.93 0.00 0.00 8322.24 1362.85 9711.24 00:34:58.568 [2024-11-06T11:41:30.183Z] =================================================================================================================== 00:34:58.568 [2024-11-06T11:41:30.183Z] Total : 15341.81 59.93 0.00 0.00 8322.24 1362.85 9711.24 00:34:58.568 4987.00 IOPS, 19.48 MiB/s 00:34:58.568 Latency(us) 00:34:58.568 [2024-11-06T11:41:30.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.568 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:58.568 Nvme1n1 : 1.02 5036.04 19.67 0.00 0.00 25264.44 2904.44 35985.22 00:34:58.568 [2024-11-06T11:41:30.183Z] =================================================================================================================== 00:34:58.568 [2024-11-06T11:41:30.183Z] Total : 5036.04 19.67 0.00 0.00 25264.44 2904.44 35985.22 00:34:58.568 163088.00 IOPS, 637.06 MiB/s 00:34:58.568 Latency(us) 00:34:58.568 [2024-11-06T11:41:30.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.568 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:58.568 Nvme1n1 : 1.00 162710.38 635.59 0.00 0.00 781.87 344.44 2308.65 00:34:58.568 [2024-11-06T11:41:30.183Z] 
=================================================================================================================== 00:34:58.568 [2024-11-06T11:41:30.183Z] Total : 162710.38 635.59 0.00 0.00 781.87 344.44 2308.65 00:34:58.568 5095.00 IOPS, 19.90 MiB/s 00:34:58.568 Latency(us) 00:34:58.568 [2024-11-06T11:41:30.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.568 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:58.568 Nvme1n1 : 1.01 5190.60 20.28 0.00 0.00 24570.11 5540.77 47900.86 00:34:58.568 [2024-11-06T11:41:30.183Z] =================================================================================================================== 00:34:58.568 [2024-11-06T11:41:30.183Z] Total : 5190.60 20.28 0.00 0.00 24570.11 5540.77 47900.86 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 404616 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 404618 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 404620 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:58.568 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:58.568 rmmod nvme_tcp 00:34:58.568 rmmod nvme_fabrics 00:34:58.826 rmmod nvme_keyring 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 404593 ']' 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 404593 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 404593 ']' 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 404593 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 
-- # '[' Linux = Linux ']' 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 404593 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 404593' 00:34:58.826 killing process with pid 404593 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 404593 00:34:58.826 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 404593 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:59.084 12:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.084 12:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.987 00:35:00.987 real 0m10.063s 00:35:00.987 user 0m14.686s 00:35:00.987 sys 0m5.851s 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:00.987 ************************************ 00:35:00.987 END TEST nvmf_bdev_io_wait 00:35:00.987 ************************************ 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:00.987 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:01.247 ************************************ 00:35:01.247 START TEST nvmf_queue_depth 00:35:01.247 ************************************ 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:01.247 * Looking for test storage... 00:35:01.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:01.247 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:01.247 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:01.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.247 --rc genhtml_branch_coverage=1 00:35:01.247 --rc genhtml_function_coverage=1 00:35:01.247 --rc genhtml_legend=1 00:35:01.247 --rc geninfo_all_blocks=1 00:35:01.247 --rc geninfo_unexecuted_blocks=1 00:35:01.247 00:35:01.247 ' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:01.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.247 --rc genhtml_branch_coverage=1 00:35:01.247 --rc genhtml_function_coverage=1 00:35:01.247 --rc genhtml_legend=1 00:35:01.247 --rc geninfo_all_blocks=1 00:35:01.247 --rc geninfo_unexecuted_blocks=1 00:35:01.247 00:35:01.247 ' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:01.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.247 --rc genhtml_branch_coverage=1 00:35:01.247 --rc genhtml_function_coverage=1 00:35:01.247 --rc genhtml_legend=1 00:35:01.247 --rc geninfo_all_blocks=1 00:35:01.247 --rc geninfo_unexecuted_blocks=1 00:35:01.247 00:35:01.247 ' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:01.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.247 --rc genhtml_branch_coverage=1 00:35:01.247 --rc genhtml_function_coverage=1 00:35:01.247 --rc genhtml_legend=1 00:35:01.247 --rc geninfo_all_blocks=1 00:35:01.247 --rc geninfo_unexecuted_blocks=1 00:35:01.247 00:35:01.247 ' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.247 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.247 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.248 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.248 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:01.248 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:01.248 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:06.513 
12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:06.513 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:06.513 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:06.514 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.514 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:06.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:06.514 Found net devices under 0000:af:00.0: cvl_0_0 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:06.514 Found net devices under 0000:af:00.1: cvl_0_1 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:06.514 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.514 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:06.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:06.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:35:06.773 00:35:06.773 --- 10.0.0.2 ping statistics --- 00:35:06.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.773 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:35:06.773 00:35:06.773 --- 10.0.0.1 ping statistics --- 00:35:06.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.773 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:06.773 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=408620 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 408620 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 408620 ']' 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:06.773 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:06.773 [2024-11-06 12:41:38.372945] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:06.773 [2024-11-06 12:41:38.374289] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:35:06.773 [2024-11-06 12:41:38.374333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.032 [2024-11-06 12:41:38.449397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.032 [2024-11-06 12:41:38.489546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.032 [2024-11-06 12:41:38.489579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.032 [2024-11-06 12:41:38.489585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.032 [2024-11-06 12:41:38.489591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.032 [2024-11-06 12:41:38.489595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:07.032 [2024-11-06 12:41:38.490157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.032 [2024-11-06 12:41:38.555713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:07.032 [2024-11-06 12:41:38.555917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.032 [2024-11-06 12:41:38.642611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.032 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.291 Malloc0 00:35:07.291 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.291 [2024-11-06 12:41:38.694636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.291 
12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=408642 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 408642 /var/tmp/bdevperf.sock 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 408642 ']' 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:07.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:07.291 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.291 [2024-11-06 12:41:38.750706] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:35:07.291 [2024-11-06 12:41:38.750762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408642 ] 00:35:07.291 [2024-11-06 12:41:38.846546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.291 [2024-11-06 12:41:38.895364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.550 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.550 NVMe0n1 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.550 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:07.808 Running I/O for 10 seconds... 
00:35:09.677 10121.00 IOPS, 39.54 MiB/s [2024-11-06T11:41:42.665Z] 10243.50 IOPS, 40.01 MiB/s [2024-11-06T11:41:43.599Z] 10461.00 IOPS, 40.86 MiB/s [2024-11-06T11:41:44.533Z] 10507.50 IOPS, 41.04 MiB/s [2024-11-06T11:41:45.467Z] 10622.20 IOPS, 41.49 MiB/s [2024-11-06T11:41:46.403Z] 10654.17 IOPS, 41.62 MiB/s [2024-11-06T11:41:47.337Z] 10690.43 IOPS, 41.76 MiB/s [2024-11-06T11:41:48.710Z] 10729.38 IOPS, 41.91 MiB/s [2024-11-06T11:41:49.645Z] 10701.44 IOPS, 41.80 MiB/s [2024-11-06T11:41:49.645Z] 10725.50 IOPS, 41.90 MiB/s 00:35:18.030 Latency(us) 00:35:18.030 [2024-11-06T11:41:49.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.030 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:18.030 Verification LBA range: start 0x0 length 0x4000 00:35:18.030 NVMe0n1 : 10.07 10752.60 42.00 0.00 0.00 94837.40 24784.52 64344.44 00:35:18.030 [2024-11-06T11:41:49.645Z] =================================================================================================================== 00:35:18.030 [2024-11-06T11:41:49.645Z] Total : 10752.60 42.00 0.00 0.00 94837.40 24784.52 64344.44 00:35:18.030 { 00:35:18.030 "results": [ 00:35:18.030 { 00:35:18.030 "job": "NVMe0n1", 00:35:18.030 "core_mask": "0x1", 00:35:18.030 "workload": "verify", 00:35:18.030 "status": "finished", 00:35:18.030 "verify_range": { 00:35:18.030 "start": 0, 00:35:18.030 "length": 16384 00:35:18.030 }, 00:35:18.030 "queue_depth": 1024, 00:35:18.030 "io_size": 4096, 00:35:18.030 "runtime": 10.066682, 00:35:18.030 "iops": 10752.599515908023, 00:35:18.030 "mibps": 42.002341859015715, 00:35:18.030 "io_failed": 0, 00:35:18.030 "io_timeout": 0, 00:35:18.030 "avg_latency_us": 94837.39502993684, 00:35:18.030 "min_latency_us": 24784.523636363636, 00:35:18.030 "max_latency_us": 64344.43636363636 00:35:18.030 } 00:35:18.030 ], 00:35:18.030 "core_count": 1 00:35:18.030 } 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 408642 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 408642 ']' 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 408642 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 408642 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 408642' 00:35:18.030 killing process with pid 408642 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 408642 00:35:18.030 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.030 00:35:18.030 Latency(us) 00:35:18.030 [2024-11-06T11:41:49.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.030 [2024-11-06T11:41:49.645Z] =================================================================================================================== 00:35:18.030 [2024-11-06T11:41:49.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 408642 00:35:18.030 12:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.030 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.030 rmmod nvme_tcp 00:35:18.288 rmmod nvme_fabrics 00:35:18.288 rmmod nvme_keyring 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 408620 ']' 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 408620 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 408620 ']' 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 408620 00:35:18.288 12:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 408620 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 408620' 00:35:18.288 killing process with pid 408620 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 408620 00:35:18.288 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 408620 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.547 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.626 00:35:20.626 real 0m19.423s 00:35:20.626 user 0m22.823s 00:35:20.626 sys 0m6.234s 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:20.626 ************************************ 00:35:20.626 END TEST nvmf_queue_depth 00:35:20.626 ************************************ 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:20.626 ************************************ 00:35:20.626 START 
TEST nvmf_target_multipath 00:35:20.626 ************************************ 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:20.626 * Looking for test storage... 00:35:20.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:35:20.626 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:20.886 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:20.886 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.886 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.887 12:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:20.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.887 --rc genhtml_branch_coverage=1 00:35:20.887 --rc genhtml_function_coverage=1 00:35:20.887 --rc genhtml_legend=1 00:35:20.887 --rc geninfo_all_blocks=1 00:35:20.887 --rc geninfo_unexecuted_blocks=1 00:35:20.887 00:35:20.887 ' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:20.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.887 --rc genhtml_branch_coverage=1 00:35:20.887 --rc genhtml_function_coverage=1 00:35:20.887 --rc genhtml_legend=1 00:35:20.887 --rc geninfo_all_blocks=1 00:35:20.887 --rc geninfo_unexecuted_blocks=1 00:35:20.887 00:35:20.887 ' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:20.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.887 --rc genhtml_branch_coverage=1 00:35:20.887 --rc genhtml_function_coverage=1 00:35:20.887 --rc genhtml_legend=1 00:35:20.887 --rc geninfo_all_blocks=1 00:35:20.887 --rc geninfo_unexecuted_blocks=1 00:35:20.887 00:35:20.887 ' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:20.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.887 --rc genhtml_branch_coverage=1 00:35:20.887 --rc genhtml_function_coverage=1 00:35:20.887 --rc genhtml_legend=1 00:35:20.887 --rc geninfo_all_blocks=1 00:35:20.887 --rc geninfo_unexecuted_blocks=1 00:35:20.887 00:35:20.887 ' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:20.887 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:20.888 12:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.888 12:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:20.888 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.155 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.155 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:26.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:26.156 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:26.156 Found net devices under 0000:af:00.0: cvl_0_0 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.156 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:26.156 Found net devices under 0000:af:00.1: cvl_0_1 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.156 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.156 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.416 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:35:26.416 00:35:26.416 --- 10.0.0.2 ping statistics --- 00:35:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.416 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:35:26.416 00:35:26.416 --- 10.0.0.1 ping statistics --- 00:35:26.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.416 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:26.416 only one NIC for nvmf test 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:26.416 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:26.416 rmmod nvme_tcp 00:35:26.416 rmmod nvme_fabrics 00:35:26.416 rmmod nvme_keyring 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:26.416 12:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.416 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:26.416 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:26.416 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:26.416 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.416 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.416 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.952 
12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.952 00:35:28.952 real 0m8.025s 00:35:28.952 user 0m1.691s 00:35:28.952 sys 0m4.356s 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:28.952 ************************************ 00:35:28.952 END TEST nvmf_target_multipath 00:35:28.952 ************************************ 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:28.952 ************************************ 00:35:28.952 START TEST nvmf_zcopy 00:35:28.952 ************************************ 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:28.952 * Looking for test storage... 
00:35:28.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:35:28.952 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:28.953 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.953 --rc genhtml_branch_coverage=1 00:35:28.953 --rc genhtml_function_coverage=1 00:35:28.953 --rc genhtml_legend=1 00:35:28.953 --rc geninfo_all_blocks=1 00:35:28.953 --rc geninfo_unexecuted_blocks=1 00:35:28.953 00:35:28.953 ' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.953 --rc genhtml_branch_coverage=1 00:35:28.953 --rc genhtml_function_coverage=1 00:35:28.953 --rc genhtml_legend=1 00:35:28.953 --rc geninfo_all_blocks=1 00:35:28.953 --rc geninfo_unexecuted_blocks=1 00:35:28.953 00:35:28.953 ' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.953 --rc genhtml_branch_coverage=1 00:35:28.953 --rc genhtml_function_coverage=1 00:35:28.953 --rc genhtml_legend=1 00:35:28.953 --rc geninfo_all_blocks=1 00:35:28.953 --rc geninfo_unexecuted_blocks=1 00:35:28.953 00:35:28.953 ' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.953 --rc genhtml_branch_coverage=1 00:35:28.953 --rc genhtml_function_coverage=1 00:35:28.953 --rc genhtml_legend=1 00:35:28.953 --rc geninfo_all_blocks=1 00:35:28.953 --rc geninfo_unexecuted_blocks=1 00:35:28.953 00:35:28.953 ' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.953 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.953 12:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.953 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.223 
12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.223 12:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.223 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:34.224 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:34.224 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:34.224 Found net devices under 0000:af:00.0: cvl_0_0 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:34.224 Found net devices under 0000:af:00.1: cvl_0_1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.224 12:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:35:34.224 00:35:34.224 --- 10.0.0.2 ping statistics --- 00:35:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.224 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:35:34.224 00:35:34.224 --- 10.0.0.1 ping statistics --- 00:35:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.224 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=417779 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 417779 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 417779 ']' 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.224 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.225 [2024-11-06 12:42:05.590985] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:34.225 [2024-11-06 12:42:05.592325] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:35:34.225 [2024-11-06 12:42:05.592367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.225 [2024-11-06 12:42:05.663726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.225 [2024-11-06 12:42:05.702157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.225 [2024-11-06 12:42:05.702192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.225 [2024-11-06 12:42:05.702199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.225 [2024-11-06 12:42:05.702209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.225 [2024-11-06 12:42:05.702215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.225 [2024-11-06 12:42:05.702746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.225 [2024-11-06 12:42:05.767118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:34.225 [2024-11-06 12:42:05.767320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
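The namespace plumbing logged earlier (nvmf/common.sh@267 through @287) boils down to a handful of ip and iptables commands. A dry-run sketch, with the cvl_0_* interface names and 10.0.0.x addresses taken from this log; run() only prints the commands, since executing them needs root and this CI host's E810 ports:

```shell
run() { echo "+ $*"; }   # replace the echo with "$@" to execute for real (root required)

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                        # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
```

The two ping runs in the log are the harness verifying this plumbing in both directions before it starts nvmf_tgt inside the namespace.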
00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:34.225 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 [2024-11-06 12:42:05.859343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 
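The rpc_cmd calls issued by zcopy.sh@22 through @30 provision the target over its RPC socket. A sketch of the same sequence written as plain rpc.py invocations, flags copied verbatim from the log; rpc() only echoes here, since the real calls need the freshly started nvmf_tgt listening on /var/tmp/spdk.sock:

```shell
rpc() { echo "rpc.py $*"; }   # stand-in for: scripts/rpc.py -s /var/tmp/spdk.sock "$@"

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # zero-copy enabled, flags as logged
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0                 # RAM-backed bdev, 4 KiB blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```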
12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 [2024-11-06 12:42:05.883633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 malloc0 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:34.483 { 00:35:34.483 "params": { 00:35:34.483 "name": "Nvme$subsystem", 00:35:34.483 "trtype": "$TEST_TRANSPORT", 00:35:34.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.483 "adrfam": "ipv4", 00:35:34.483 "trsvcid": "$NVMF_PORT", 00:35:34.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.483 "hdgst": ${hdgst:-false}, 00:35:34.483 "ddgst": ${ddgst:-false} 00:35:34.483 }, 00:35:34.483 "method": "bdev_nvme_attach_controller" 00:35:34.483 } 00:35:34.483 EOF 00:35:34.483 )") 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:34.483 12:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:34.483 12:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:34.483 "params": { 00:35:34.483 "name": "Nvme1", 00:35:34.483 "trtype": "tcp", 00:35:34.483 "traddr": "10.0.0.2", 00:35:34.483 "adrfam": "ipv4", 00:35:34.484 "trsvcid": "4420", 00:35:34.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.484 "hdgst": false, 00:35:34.484 "ddgst": false 00:35:34.484 }, 00:35:34.484 "method": "bdev_nvme_attach_controller" 00:35:34.484 }' 00:35:34.484 [2024-11-06 12:42:05.979847] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:35:34.484 [2024-11-06 12:42:05.979906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417808 ] 00:35:34.484 [2024-11-06 12:42:06.074546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.742 [2024-11-06 12:42:06.123993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.001 Running I/O for 10 seconds... 
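gen_nvmf_target_json above builds the attach-controller config on the fly and zcopy.sh hands it to bdevperf as an anonymous descriptor (--json /dev/fd/62), so no temp file touches disk. A sketch of the generation side with the values copied from the log; bdevperf itself is not run here since it needs this workspace's SPDK build:

```shell
# Emits the controller object printed by gen_nvmf_target_json in the log.
gen_nvmf_target_json() {
    cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Real call (bash process substitution, as in zcopy.sh@33):
#   bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62< <(gen_nvmf_target_json)
gen_nvmf_target_json
```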
00:35:36.871 8305.00 IOPS, 64.88 MiB/s [2024-11-06T11:42:09.421Z] 8357.50 IOPS, 65.29 MiB/s [2024-11-06T11:42:10.795Z] 8381.67 IOPS, 65.48 MiB/s [2024-11-06T11:42:11.729Z] 8384.25 IOPS, 65.50 MiB/s [2024-11-06T11:42:12.663Z] 8388.80 IOPS, 65.54 MiB/s [2024-11-06T11:42:13.598Z] 8394.67 IOPS, 65.58 MiB/s [2024-11-06T11:42:14.533Z] 8399.86 IOPS, 65.62 MiB/s [2024-11-06T11:42:15.467Z] 8402.25 IOPS, 65.64 MiB/s [2024-11-06T11:42:16.842Z] 8404.22 IOPS, 65.66 MiB/s 00:35:45.227 Latency(us) 00:35:45.227 [2024-11-06T11:42:16.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:45.227 Verification LBA range: start 0x0 length 0x1000 00:35:45.227 Nvme1n1 : 10.01 8402.93 65.65 0.00 0.00 15170.20 878.78 22043.93 00:35:45.227 [2024-11-06T11:42:16.842Z] =================================================================================================================== 00:35:45.227 [2024-11-06T11:42:16.842Z] Total : 8402.93 65.65 0.00 0.00 15170.20 878.78 22043.93 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=420015 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local 
subsystem config 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.227 { 00:35:45.227 "params": { 00:35:45.227 "name": "Nvme$subsystem", 00:35:45.227 "trtype": "$TEST_TRANSPORT", 00:35:45.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.227 "adrfam": "ipv4", 00:35:45.227 "trsvcid": "$NVMF_PORT", 00:35:45.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.227 "hdgst": ${hdgst:-false}, 00:35:45.227 "ddgst": ${ddgst:-false} 00:35:45.227 }, 00:35:45.227 "method": "bdev_nvme_attach_controller" 00:35:45.227 } 00:35:45.227 EOF 00:35:45.227 )") 00:35:45.227 [2024-11-06 12:42:16.619067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.227 [2024-11-06 12:42:16.619096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:45.227 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:45.227 "params": { 00:35:45.227 "name": "Nvme1", 00:35:45.227 "trtype": "tcp", 00:35:45.227 "traddr": "10.0.0.2", 00:35:45.227 "adrfam": "ipv4", 00:35:45.227 "trsvcid": "4420", 00:35:45.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.227 "hdgst": false, 00:35:45.227 "ddgst": false 00:35:45.227 }, 00:35:45.227 "method": "bdev_nvme_attach_controller" 00:35:45.227 }' 00:35:45.227 [2024-11-06 12:42:16.631033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.227 [2024-11-06 12:42:16.631045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.227 [2024-11-06 12:42:16.643028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.643036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.655030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.655037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.665899] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:35:45.228 [2024-11-06 12:42:16.665954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420015 ] 00:35:45.228 [2024-11-06 12:42:16.667033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.667043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.679029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.679039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.691032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.691041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.703032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.703045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.715031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.715039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.727027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.727035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.739026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.739034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:35:45.228 [2024-11-06 12:42:16.751031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.751039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.760398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.228 [2024-11-06 12:42:16.763041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.763054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.775034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.775045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.787029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.787037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.799029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.799037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.809326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.228 [2024-11-06 12:42:16.811030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.811042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.823037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.823052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.228 [2024-11-06 12:42:16.835034] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.228 [2024-11-06 12:42:16.835051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.486 [2024-11-06 12:42:16.847032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.847043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.859030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.859041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.871032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.871042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.883028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.883038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.895029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.895038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.907043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.907062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.919036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.919053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.931034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.931047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.943030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.943038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.955028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.955036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.967030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.967042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.979031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.979044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:16.991029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:16.991038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.003030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.003040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.015029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.015039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.027032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 
[2024-11-06 12:42:17.027045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.039031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.039040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.051030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.051040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.063032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.063045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.075033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.075045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.087031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.087039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.487 [2024-11-06 12:42:17.099030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.487 [2024-11-06 12:42:17.099038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.745 [2024-11-06 12:42:17.111032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.111043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.123039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.123055] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 Running I/O for 5 seconds... 00:35:45.746 [2024-11-06 12:42:17.139969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.139987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.154918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.154941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.167512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.167529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.180271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.180288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.194599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.194617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.208533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.208551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.222340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.222359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.236104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.236122] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.248570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.248587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.263013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.263031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.276026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.276044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.286886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.286904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.300297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.300314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.314667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.314685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.328423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.328440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.746 [2024-11-06 12:42:17.343090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.343108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:45.746 [2024-11-06 12:42:17.356732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.746 [2024-11-06 12:42:17.356749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.371156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.371173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.384845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.384862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.398512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.398530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.412402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.412423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.426593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.426611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.440322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.440341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.454467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.454486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.468234] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.468251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.482738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.482756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.004 [2024-11-06 12:42:17.496452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.004 [2024-11-06 12:42:17.496476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.510717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.510735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.524116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.524133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.535909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.535926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.548540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.548557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.562494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.562511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.575798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.575815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.588311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.588333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.602864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.602881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.005 [2024-11-06 12:42:17.616250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.005 [2024-11-06 12:42:17.616267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.630356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.630372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.644274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.644291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.657912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.657929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.671383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.671399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.684091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 
[2024-11-06 12:42:17.684107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.696602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.696619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.710639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.710656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.724192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.724209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.738284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.738301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.751642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.751659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.763252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.763273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.776376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.776394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.786587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.786604] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.800059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.800076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.814546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.814564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.827972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.827989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.839356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.839373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.852237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.852253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.263 [2024-11-06 12:42:17.866729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.263 [2024-11-06 12:42:17.866747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.880078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.880095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.891593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.891610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:46.522 [2024-11-06 12:42:17.904420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.904437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.918851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.918869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.932629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.932646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.946566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.946582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.959939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.959956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.974151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.974168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.987815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.987832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:17.999118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:17.999135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.011981] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.011997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.024381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.024399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.038273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.038291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.051785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.051802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.063485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.063501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.076691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.076708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.090757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.090774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.104006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.104023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.115352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.115368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.522 [2024-11-06 12:42:18.128476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.522 [2024-11-06 12:42:18.128493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 18276.00 IOPS, 142.78 MiB/s [2024-11-06T11:42:18.395Z] [2024-11-06 12:42:18.142595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.142613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.156193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.156214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.170372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.170389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.184132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.184149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.198379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.198397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.212050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.212067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.226496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.226514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.240113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.240130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.251398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.251414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.264545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.264562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.278756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.278773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.292073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.292090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.306437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.306454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.319855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 [2024-11-06 12:42:18.319872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.780 [2024-11-06 12:42:18.331615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.780 
[2024-11-06 12:42:18.331632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.781 [2024-11-06 12:42:18.346700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.781 [2024-11-06 12:42:18.346717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.781 [2024-11-06 12:42:18.360565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.781 [2024-11-06 12:42:18.360582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.781 [2024-11-06 12:42:18.375036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.781 [2024-11-06 12:42:18.375053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.781 [2024-11-06 12:42:18.388523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.781 [2024-11-06 12:42:18.388541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.402281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.402298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.415820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.415841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.428239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.428256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.440501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.440519] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.454518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.454535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.468324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.468341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.482394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.482411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.496651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.496668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.511288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.511304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.526484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.526502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.539999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.540016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 [2024-11-06 12:42:18.554805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.554823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:47.039 [2024-11-06 12:42:18.568320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.039 [2024-11-06 12:42:18.568338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.039 (the preceding pair of *ERROR* messages repeats roughly every 10-15 ms from 12:42:18.581841 through 12:42:20.774933, log timestamps 00:35:47.039 through 00:35:49.368; duplicate entries omitted) 00:35:47.557 18277.50 IOPS, 142.79 MiB/s [2024-11-06T11:42:19.172Z] 00:35:48.592 18286.67 IOPS, 142.86 MiB/s [2024-11-06T11:42:20.207Z] 00:35:49.368 [2024-11-06 12:42:20.788573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 
[2024-11-06 12:42:20.788590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.803199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.803217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.815626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.815643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.830853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.830870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.844770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.844787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.858692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.858709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.872195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.872212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.886474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.886491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.900005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.900021] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.911139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.911156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.924521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.924538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.938478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.938496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.952305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.952322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.966882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.966899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.368 [2024-11-06 12:42:20.980566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.368 [2024-11-06 12:42:20.980583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:20.995057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:20.995074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.008218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.008236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:49.627 [2024-11-06 12:42:21.022692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.022710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.036352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.036369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.050216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.050232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.063249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.063266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.075268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.075285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.088403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.088421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.102183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.102200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.115647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.115664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.127405] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.127422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.140685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.140702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 18288.75 IOPS, 142.88 MiB/s [2024-11-06T11:42:21.242Z] [2024-11-06 12:42:21.151042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.151059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.164525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.164542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.178868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.178885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.192294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.192311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.206443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.206466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.220048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.220066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.627 [2024-11-06 12:42:21.234688] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.627 [2024-11-06 12:42:21.234712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.248706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.248723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.262992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.263008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.276396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.276413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.290999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.291016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.304588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.304605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.318999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.319016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.332542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.885 [2024-11-06 12:42:21.332559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.885 [2024-11-06 12:42:21.346285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.346302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.359738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.359755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.374451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.374473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.388713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.388730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.402624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.402641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.416616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.416632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.430399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.430416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.443441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.443457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.455437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 
[2024-11-06 12:42:21.455454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.468514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.468531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.482471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.482489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:49.886 [2024-11-06 12:42:21.496103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:49.886 [2024-11-06 12:42:21.496124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.510402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.510419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.524129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.524146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.538675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.538692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.552646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.552664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.566630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.566648] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.580221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.580238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.594261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.594278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.608260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.608277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.622116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.622134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.635566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.635583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.648196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.648214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.662881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.662900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.676910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.676930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:50.144 [2024-11-06 12:42:21.691635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.691653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.706999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.707016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.720586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.720605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.734842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.734860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.144 [2024-11-06 12:42:21.748450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.144 [2024-11-06 12:42:21.748474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.762711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.762733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.776202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.776219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.790280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.790298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.804054] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.804072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.818252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.818270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.832123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.832140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.846693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.846711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.859918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.859935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.874643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.874660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.887951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.887968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.900268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.900285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.914565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.914584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.928183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.928201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.943184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.943202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.955617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.955633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.968500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.968517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.980034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.980052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:21.992844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:21.992862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:22.007288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 [2024-11-06 12:42:22.007305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.403 [2024-11-06 12:42:22.019482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.403 
[2024-11-06 12:42:22.019503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.661 [2024-11-06 12:42:22.032539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.032559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.046719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.046736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.060389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.060406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.074581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.074599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.088319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.088339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.102313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.102331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.116012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.116029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.131163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.131180] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 [2024-11-06 12:42:22.144274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.662 [2024-11-06 12:42:22.144293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 18271.20 IOPS, 142.74 MiB/s
00:35:50.662 Latency(us)
00:35:50.662 [2024-11-06T11:42:22.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:50.662 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:50.662 Nvme1n1 : 5.01 18271.76 142.75 0.00 0.00 6997.89 2293.76 14477.50
00:35:50.662 [2024-11-06T11:42:22.277Z] ===================================================================================================================
00:35:50.662 [2024-11-06T11:42:22.277Z] Total : 18271.76 142.75 0.00 0.00 6997.89 2293.76 14477.50
00:35:50.662 [2024-11-06 12:42:22.155038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.662 [2024-11-06 12:42:22.155054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 [2024-11-06 12:42:22.167035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.662 [2024-11-06 12:42:22.167049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 [2024-11-06 12:42:22.179038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.662 [2024-11-06 12:42:22.179051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 [2024-11-06 12:42:22.191039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:50.662 [2024-11-06 12:42:22.191055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:50.662 [2024-11-06 12:42:22.203036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.203048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.215030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.215041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.227031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.227041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.239031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.239042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.251030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.251041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.263030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.263041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.662 [2024-11-06 12:42:22.275029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.662 [2024-11-06 12:42:22.275037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.920 [2024-11-06 12:42:22.287031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.920 [2024-11-06 12:42:22.287043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.920 [2024-11-06 12:42:22.299029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.920 
[2024-11-06 12:42:22.299038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.920 [2024-11-06 12:42:22.311029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.920 [2024-11-06 12:42:22.311037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.920 [2024-11-06 12:42:22.323033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:50.920 [2024-11-06 12:42:22.323042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:50.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (420015) - No such process 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 420015 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:50.920 delay0 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.920 12:42:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.920 12:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:50.920 [2024-11-06 12:42:22.509631] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:59.029 Initializing NVMe Controllers 00:35:59.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:59.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:59.029 Initialization complete. Launching workers. 
00:35:59.030 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 253, failed: 28221 00:35:59.030 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28354, failed to submit 120 00:35:59.030 success 28243, unsuccessful 111, failed 0 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.030 rmmod nvme_tcp 00:35:59.030 rmmod nvme_fabrics 00:35:59.030 rmmod nvme_keyring 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 417779 ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@952 -- # '[' -z 417779 ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 417779' 00:35:59.030 killing process with pid 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 417779 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:59.030 
12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.030 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.935 00:36:00.935 real 0m31.856s 00:36:00.935 user 0m42.002s 00:36:00.935 sys 0m12.420s 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:00.935 ************************************ 00:36:00.935 END TEST nvmf_zcopy 00:36:00.935 ************************************ 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:00.935 
************************************ 00:36:00.935 START TEST nvmf_nmic 00:36:00.935 ************************************ 00:36:00.935 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:00.935 * Looking for test storage... 00:36:00.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.936 12:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.936 12:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:00.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.936 --rc genhtml_branch_coverage=1 00:36:00.936 --rc genhtml_function_coverage=1 00:36:00.936 --rc genhtml_legend=1 00:36:00.936 --rc geninfo_all_blocks=1 00:36:00.936 --rc geninfo_unexecuted_blocks=1 00:36:00.936 00:36:00.936 ' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:00.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.936 --rc genhtml_branch_coverage=1 00:36:00.936 --rc genhtml_function_coverage=1 00:36:00.936 --rc genhtml_legend=1 00:36:00.936 --rc geninfo_all_blocks=1 00:36:00.936 --rc geninfo_unexecuted_blocks=1 00:36:00.936 00:36:00.936 ' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:00.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.936 --rc genhtml_branch_coverage=1 00:36:00.936 --rc genhtml_function_coverage=1 00:36:00.936 --rc genhtml_legend=1 00:36:00.936 --rc geninfo_all_blocks=1 00:36:00.936 --rc geninfo_unexecuted_blocks=1 00:36:00.936 00:36:00.936 ' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:00.936 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.936 --rc genhtml_branch_coverage=1 00:36:00.936 --rc genhtml_function_coverage=1 00:36:00.936 --rc genhtml_legend=1 00:36:00.936 --rc geninfo_all_blocks=1 00:36:00.936 --rc geninfo_unexecuted_blocks=1 00:36:00.936 00:36:00.936 ' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:00.936 12:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.936 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.936 12:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.937 12:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.210 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.210 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.211 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:06.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:06.211 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.211 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:06.211 Found net devices under 0000:af:00.0: cvl_0_0 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.211 12:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:06.211 Found net devices under 0000:af:00.1: cvl_0_1 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.211 12:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:06.211 12:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:36:06.211 00:36:06.211 --- 10.0.0.2 ping statistics --- 00:36:06.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.211 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:36:06.211 00:36:06.211 --- 10.0.0.1 ping statistics --- 00:36:06.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.211 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:06.211 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=425739 
00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 425739 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 425739 ']' 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:06.212 [2024-11-06 12:42:37.311380] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:06.212 [2024-11-06 12:42:37.312731] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:36:06.212 [2024-11-06 12:42:37.312775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.212 [2024-11-06 12:42:37.413319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.212 [2024-11-06 12:42:37.464438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.212 [2024-11-06 12:42:37.464491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.212 [2024-11-06 12:42:37.464501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.212 [2024-11-06 12:42:37.464510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.212 [2024-11-06 12:42:37.464518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.212 [2024-11-06 12:42:37.466442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.212 [2024-11-06 12:42:37.466538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:06.212 [2024-11-06 12:42:37.466575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:06.212 [2024-11-06 12:42:37.466579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.212 [2024-11-06 12:42:37.540365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:06.212 [2024-11-06 12:42:37.540610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:06.212 [2024-11-06 12:42:37.540800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:36:06.212 [2024-11-06 12:42:37.541177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:06.212 [2024-11-06 12:42:37.541433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 [2024-11-06 12:42:37.607306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 Malloc0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 [2024-11-06 12:42:37.671588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:06.212 test case1: single bdev can't be used in multiple subsystems 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.212 [2024-11-06 12:42:37.695083] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:36:06.212 [2024-11-06 12:42:37.695110] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:06.212 [2024-11-06 12:42:37.695121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:06.212 request: 00:36:06.212 { 00:36:06.212 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:06.212 "namespace": { 00:36:06.212 "bdev_name": "Malloc0", 00:36:06.212 "no_auto_visible": false 00:36:06.212 }, 00:36:06.212 "method": "nvmf_subsystem_add_ns", 00:36:06.212 "req_id": 1 00:36:06.212 } 00:36:06.212 Got JSON-RPC error response 00:36:06.212 response: 00:36:06.212 { 00:36:06.212 "code": -32602, 00:36:06.212 "message": "Invalid parameters" 00:36:06.212 } 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:06.212 Adding namespace failed - expected result. 
00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:06.212 test case2: host connect to nvmf target in multiple paths 00:36:06.212 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:06.213 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.213 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:06.213 [2024-11-06 12:42:37.703200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:06.213 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.213 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:06.472 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:06.731 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:06.731 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:36:06.731 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:06.731 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:36:06.731 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:36:08.632 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:08.632 [global] 00:36:08.632 thread=1 00:36:08.632 invalidate=1 00:36:08.632 rw=write 00:36:08.632 time_based=1 00:36:08.632 runtime=1 00:36:08.632 ioengine=libaio 00:36:08.632 direct=1 00:36:08.632 bs=4096 00:36:08.632 iodepth=1 00:36:08.632 norandommap=0 00:36:08.632 numjobs=1 00:36:08.632 00:36:08.632 verify_dump=1 00:36:08.632 verify_backlog=512 00:36:08.632 verify_state_save=0 00:36:08.632 do_verify=1 00:36:08.632 verify=crc32c-intel 00:36:08.632 [job0] 00:36:08.632 filename=/dev/nvme0n1 00:36:08.909 Could not set queue depth (nvme0n1) 00:36:09.167 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:09.167 fio-3.35 00:36:09.167 Starting 1 thread 00:36:10.538 00:36:10.538 job0: (groupid=0, jobs=1): err= 0: pid=426508: Wed Nov 6 12:42:41 
2024 00:36:10.538 read: IOPS=23, BW=93.8KiB/s (96.1kB/s)(96.0KiB/1023msec) 00:36:10.538 slat (nsec): min=9880, max=25696, avg=21366.71, stdev=3458.05 00:36:10.538 clat (usec): min=544, max=41303, avg=39280.72, stdev=8251.59 00:36:10.538 lat (usec): min=570, max=41314, avg=39302.09, stdev=8250.67 00:36:10.538 clat percentiles (usec): 00:36:10.538 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:10.538 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:10.538 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.538 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:10.538 | 99.99th=[41157] 00:36:10.538 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:36:10.538 slat (nsec): min=9750, max=39959, avg=10673.53, stdev=1468.62 00:36:10.538 clat (usec): min=133, max=408, avg=142.61, stdev=13.41 00:36:10.538 lat (usec): min=143, max=448, avg=153.28, stdev=14.57 00:36:10.538 clat percentiles (usec): 00:36:10.538 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:36:10.538 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:36:10.538 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 147], 95.00th=[ 151], 00:36:10.538 | 99.00th=[ 167], 99.50th=[ 186], 99.90th=[ 408], 99.95th=[ 408], 00:36:10.538 | 99.99th=[ 408] 00:36:10.538 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.538 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.538 lat (usec) : 250=95.34%, 500=0.19%, 750=0.19% 00:36:10.538 lat (msec) : 50=4.29% 00:36:10.538 cpu : usr=0.10%, sys=0.68%, ctx=536, majf=0, minf=1 00:36:10.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.538 issued rwts: 
total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.538 00:36:10.538 Run status group 0 (all jobs): 00:36:10.538 READ: bw=93.8KiB/s (96.1kB/s), 93.8KiB/s-93.8KiB/s (96.1kB/s-96.1kB/s), io=96.0KiB (98.3kB), run=1023-1023msec 00:36:10.538 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:36:10.538 00:36:10.538 Disk stats (read/write): 00:36:10.538 nvme0n1: ios=70/512, merge=0/0, ticks=793/73, in_queue=866, util=91.58% 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:10.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:10.538 12:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.538 rmmod nvme_tcp 00:36:10.538 rmmod nvme_fabrics 00:36:10.538 rmmod nvme_keyring 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 425739 ']' 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 425739 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 425739 ']' 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 425739 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:10.538 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 425739 00:36:10.539 
12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:10.539 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:10.539 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 425739' 00:36:10.539 killing process with pid 425739 00:36:10.539 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 425739 00:36:10.539 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 425739 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.796 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:13.328 00:36:13.328 real 0m12.222s 00:36:13.328 user 0m29.093s 00:36:13.328 sys 0m5.285s 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:13.328 ************************************ 00:36:13.328 END TEST nvmf_nmic 00:36:13.328 ************************************ 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:13.328 ************************************ 00:36:13.328 START TEST nvmf_fio_target 00:36:13.328 ************************************ 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:13.328 * Looking for test storage... 
00:36:13.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:13.328 
12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.328 --rc genhtml_branch_coverage=1 00:36:13.328 --rc genhtml_function_coverage=1 00:36:13.328 --rc genhtml_legend=1 00:36:13.328 --rc geninfo_all_blocks=1 00:36:13.328 --rc geninfo_unexecuted_blocks=1 00:36:13.328 00:36:13.328 ' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.328 --rc genhtml_branch_coverage=1 00:36:13.328 --rc genhtml_function_coverage=1 00:36:13.328 --rc genhtml_legend=1 00:36:13.328 --rc geninfo_all_blocks=1 00:36:13.328 --rc geninfo_unexecuted_blocks=1 00:36:13.328 00:36:13.328 ' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.328 --rc genhtml_branch_coverage=1 00:36:13.328 --rc genhtml_function_coverage=1 00:36:13.328 --rc genhtml_legend=1 00:36:13.328 --rc geninfo_all_blocks=1 00:36:13.328 --rc geninfo_unexecuted_blocks=1 00:36:13.328 00:36:13.328 ' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.328 --rc genhtml_branch_coverage=1 00:36:13.328 --rc genhtml_function_coverage=1 00:36:13.328 --rc genhtml_legend=1 00:36:13.328 --rc geninfo_all_blocks=1 
00:36:13.328 --rc geninfo_unexecuted_blocks=1 00:36:13.328 00:36:13.328 ' 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.328 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:13.329 
12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.329 12:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:13.329 
12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:13.329 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:13.329 12:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:18.588 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:18.589 12:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:36:18.589 Found 0000:af:00.0 (0x8086 - 0x159b)
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:36:18.589 Found 0000:af:00.1 (0x8086 - 0x159b)
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:36:18.589 Found net devices under 0000:af:00.0: cvl_0_0
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:36:18.589 Found net devices under 0000:af:00.1: cvl_0_1
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:18.589 12:42:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:18.589 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:18.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:18.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms
00:36:18.847
00:36:18.847 --- 10.0.0.2 ping statistics ---
00:36:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:18.847 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:18.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:18.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:36:18.847
00:36:18.847 --- 10.0.0.1 ping statistics ---
00:36:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:18.847 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:18.847 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.104 12:42:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=430256
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 430256
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 430256 ']'
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:19.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable
00:36:19.104 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.104 [2024-11-06 12:42:50.520970] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:19.104 [2024-11-06 12:42:50.522299] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:36:19.104 [2024-11-06 12:42:50.522342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:19.104 [2024-11-06 12:42:50.623556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:19.104 [2024-11-06 12:42:50.672995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:19.104 [2024-11-06 12:42:50.673041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:19.104 [2024-11-06 12:42:50.673052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:19.104 [2024-11-06 12:42:50.673061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:19.104 [2024-11-06 12:42:50.673068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:19.104 [2024-11-06 12:42:50.674996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:19.104 [2024-11-06 12:42:50.675102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:19.104 [2024-11-06 12:42:50.675115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:36:19.104 [2024-11-06 12:42:50.675120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:19.104 [2024-11-06 12:42:50.748822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:19.362 [2024-11-06 12:42:50.748966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:19.362 [2024-11-06 12:42:50.749152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:19.362 [2024-11-06 12:42:50.749616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:19.362 [2024-11-06 12:42:50.749869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:19.929 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:20.185 [2024-11-06 12:42:51.691886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:20.185 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:20.442 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:36:20.442 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:21.006 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:21.006 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:21.006 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:21.006 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:21.571 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:21.571 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:21.828 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.085 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:22.085 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.342 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:22.342 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.598 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:36:22.598 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:22.855 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:23.111 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:23.112 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:23.368 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:23.368 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:23.625 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.882 [2024-11-06 12:42:55.295911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.882 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:24.139 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:24.396 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:36:24.653 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:36:26.547 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:26.547 [global] 00:36:26.547 thread=1 00:36:26.547 invalidate=1 00:36:26.547 rw=write 00:36:26.547 time_based=1 00:36:26.547 runtime=1 00:36:26.547 ioengine=libaio 00:36:26.547 direct=1 00:36:26.547 bs=4096 00:36:26.547 iodepth=1 00:36:26.547 norandommap=0 00:36:26.547 numjobs=1 00:36:26.547 00:36:26.824 verify_dump=1 00:36:26.824 verify_backlog=512 00:36:26.824 verify_state_save=0 00:36:26.824 do_verify=1 00:36:26.824 verify=crc32c-intel 00:36:26.824 [job0] 00:36:26.824 filename=/dev/nvme0n1 00:36:26.824 [job1] 00:36:26.824 filename=/dev/nvme0n2 00:36:26.824 [job2] 00:36:26.824 filename=/dev/nvme0n3 00:36:26.824 [job3] 00:36:26.824 filename=/dev/nvme0n4 00:36:26.824 Could not set queue depth (nvme0n1) 00:36:26.824 Could not set queue depth (nvme0n2) 00:36:26.824 Could not set queue depth (nvme0n3) 00:36:26.824 Could not set queue depth (nvme0n4) 00:36:27.092 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.092 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.092 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.092 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.092 fio-3.35 00:36:27.092 Starting 4 threads 00:36:28.490 00:36:28.490 job0: (groupid=0, jobs=1): err= 0: pid=431793: Wed Nov 6 12:42:59 2024 00:36:28.490 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:36:28.490 slat (nsec): min=6843, max=21925, avg=8105.81, stdev=1070.51 00:36:28.490 clat (usec): min=228, max=514, avg=270.34, stdev=17.20 00:36:28.490 lat (usec): min=236, max=523, 
avg=278.45, stdev=17.02 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:36:28.490 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:36:28.490 | 70.00th=[ 281], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 293], 00:36:28.490 | 99.00th=[ 302], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 482], 00:36:28.490 | 99.99th=[ 515] 00:36:28.490 write: IOPS=2092, BW=8372KiB/s (8573kB/s)(8380KiB/1001msec); 0 zone resets 00:36:28.490 slat (nsec): min=9958, max=49003, avg=11521.99, stdev=2162.15 00:36:28.490 clat (usec): min=135, max=1953, avg=187.54, stdev=42.96 00:36:28.490 lat (usec): min=146, max=1967, avg=199.06, stdev=43.09 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:36:28.490 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:36:28.490 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 217], 00:36:28.490 | 99.00th=[ 233], 99.50th=[ 245], 99.90th=[ 355], 99.95th=[ 515], 00:36:28.490 | 99.99th=[ 1958] 00:36:28.490 bw ( KiB/s): min= 8744, max= 8744, per=53.82%, avg=8744.00, stdev= 0.00, samples=1 00:36:28.490 iops : min= 2186, max= 2186, avg=2186.00, stdev= 0.00, samples=1 00:36:28.490 lat (usec) : 250=58.75%, 500=41.18%, 750=0.05% 00:36:28.490 lat (msec) : 2=0.02% 00:36:28.490 cpu : usr=3.10%, sys=6.80%, ctx=4143, majf=0, minf=1 00:36:28.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 issued rwts: total=2048,2095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.490 job1: (groupid=0, jobs=1): err= 0: pid=431800: Wed Nov 6 12:42:59 2024 00:36:28.490 read: IOPS=518, BW=2075KiB/s (2124kB/s)(2116KiB/1020msec) 00:36:28.490 
slat (nsec): min=8523, max=24305, avg=9825.25, stdev=2261.87 00:36:28.490 clat (usec): min=209, max=41149, avg=1552.54, stdev=7186.65 00:36:28.490 lat (usec): min=219, max=41159, avg=1562.36, stdev=7188.48 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 239], 00:36:28.490 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:36:28.490 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 260], 00:36:28.490 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:28.490 | 99.99th=[41157] 00:36:28.490 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:36:28.490 slat (nsec): min=9985, max=36884, avg=12300.12, stdev=1961.31 00:36:28.490 clat (usec): min=135, max=1667, avg=171.71, stdev=50.38 00:36:28.490 lat (usec): min=146, max=1679, avg=184.01, stdev=50.63 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:36:28.490 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:36:28.490 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 206], 00:36:28.490 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 1663], 00:36:28.490 | 99.99th=[ 1663] 00:36:28.490 bw ( KiB/s): min= 8192, max= 8192, per=50.42%, avg=8192.00, stdev= 0.00, samples=1 00:36:28.490 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:28.490 lat (usec) : 250=92.98%, 500=5.86% 00:36:28.490 lat (msec) : 2=0.06%, 50=1.09% 00:36:28.490 cpu : usr=1.47%, sys=2.45%, ctx=1553, majf=0, minf=1 00:36:28.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.490 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:36:28.490 job2: (groupid=0, jobs=1): err= 0: pid=431820: Wed Nov 6 12:42:59 2024 00:36:28.490 read: IOPS=439, BW=1759KiB/s (1802kB/s)(1784KiB/1014msec) 00:36:28.490 slat (nsec): min=7565, max=30538, avg=9077.68, stdev=1574.15 00:36:28.490 clat (usec): min=284, max=41974, avg=1965.29, stdev=8027.46 00:36:28.490 lat (usec): min=293, max=41986, avg=1974.37, stdev=8028.30 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:36:28.490 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 326], 00:36:28.490 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 367], 00:36:28.490 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:36:28.490 | 99.99th=[42206] 00:36:28.490 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:36:28.490 slat (usec): min=10, max=143, avg=12.83, stdev= 6.09 00:36:28.490 clat (usec): min=202, max=290, avg=236.04, stdev=14.33 00:36:28.490 lat (usec): min=214, max=393, avg=248.87, stdev=15.95 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 00:36:28.490 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:36:28.490 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 265], 00:36:28.490 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 289], 00:36:28.490 | 99.99th=[ 289] 00:36:28.490 bw ( KiB/s): min= 4096, max= 4096, per=25.21%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.490 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.490 lat (usec) : 250=46.03%, 500=52.09% 00:36:28.490 lat (msec) : 50=1.88% 00:36:28.490 cpu : usr=0.99%, sys=1.38%, ctx=960, majf=0, minf=1 00:36:28.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.490 issued rwts: total=446,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.490 job3: (groupid=0, jobs=1): err= 0: pid=431828: Wed Nov 6 12:42:59 2024 00:36:28.490 read: IOPS=24, BW=99.6KiB/s (102kB/s)(100KiB/1004msec) 00:36:28.490 slat (nsec): min=7476, max=24197, avg=20446.68, stdev=5252.01 00:36:28.490 clat (usec): min=230, max=41970, avg=36100.27, stdev=13502.99 00:36:28.490 lat (usec): min=238, max=41994, avg=36120.72, stdev=13503.89 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 231], 5.00th=[ 277], 10.00th=[ 326], 20.00th=[40633], 00:36:28.490 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:28.490 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:28.490 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:28.490 | 99.99th=[42206] 00:36:28.490 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:36:28.490 slat (nsec): min=9819, max=38605, avg=13251.35, stdev=2998.12 00:36:28.490 clat (usec): min=142, max=477, avg=181.03, stdev=22.50 00:36:28.490 lat (usec): min=156, max=491, avg=194.28, stdev=23.56 00:36:28.490 clat percentiles (usec): 00:36:28.490 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:36:28.491 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:36:28.491 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 206], 00:36:28.491 | 99.00th=[ 227], 99.50th=[ 322], 99.90th=[ 478], 99.95th=[ 478], 00:36:28.491 | 99.99th=[ 478] 00:36:28.491 bw ( KiB/s): min= 4096, max= 4096, per=25.21%, avg=4096.00, stdev= 0.00, samples=1 00:36:28.491 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:28.491 lat (usec) : 250=94.60%, 500=1.30% 00:36:28.491 lat (msec) : 50=4.10% 00:36:28.491 cpu : usr=0.20%, sys=0.80%, ctx=537, majf=0, minf=2 00:36:28.491 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.491 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:28.491 00:36:28.491 Run status group 0 (all jobs): 00:36:28.491 READ: bw=11.7MiB/s (12.2MB/s), 99.6KiB/s-8184KiB/s (102kB/s-8380kB/s), io=11.9MiB (12.5MB), run=1001-1020msec 00:36:28.491 WRITE: bw=15.9MiB/s (16.6MB/s), 2020KiB/s-8372KiB/s (2068kB/s-8573kB/s), io=16.2MiB (17.0MB), run=1001-1020msec 00:36:28.491 00:36:28.491 Disk stats (read/write): 00:36:28.491 nvme0n1: ios=1586/1962, merge=0/0, ticks=418/357, in_queue=775, util=84.27% 00:36:28.491 nvme0n2: ios=527/1024, merge=0/0, ticks=732/172, in_queue=904, util=88.51% 00:36:28.491 nvme0n3: ios=496/512, merge=0/0, ticks=1024/113, in_queue=1137, util=96.26% 00:36:28.491 nvme0n4: ios=20/512, merge=0/0, ticks=698/91, in_queue=789, util=89.41% 00:36:28.491 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:28.491 [global] 00:36:28.491 thread=1 00:36:28.491 invalidate=1 00:36:28.491 rw=randwrite 00:36:28.491 time_based=1 00:36:28.491 runtime=1 00:36:28.491 ioengine=libaio 00:36:28.491 direct=1 00:36:28.491 bs=4096 00:36:28.491 iodepth=1 00:36:28.491 norandommap=0 00:36:28.491 numjobs=1 00:36:28.491 00:36:28.491 verify_dump=1 00:36:28.491 verify_backlog=512 00:36:28.491 verify_state_save=0 00:36:28.491 do_verify=1 00:36:28.491 verify=crc32c-intel 00:36:28.491 [job0] 00:36:28.491 filename=/dev/nvme0n1 00:36:28.491 [job1] 00:36:28.491 filename=/dev/nvme0n2 00:36:28.491 [job2] 00:36:28.491 filename=/dev/nvme0n3 00:36:28.491 [job3] 00:36:28.491 filename=/dev/nvme0n4 00:36:28.491 Could not set queue 
depth (nvme0n1) 00:36:28.491 Could not set queue depth (nvme0n2) 00:36:28.491 Could not set queue depth (nvme0n3) 00:36:28.491 Could not set queue depth (nvme0n4) 00:36:28.755 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:28.755 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:28.755 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:28.755 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:28.755 fio-3.35 00:36:28.755 Starting 4 threads 00:36:30.151 00:36:30.151 job0: (groupid=0, jobs=1): err= 0: pid=432240: Wed Nov 6 12:43:01 2024 00:36:30.151 read: IOPS=1015, BW=4063KiB/s (4161kB/s)(4128KiB/1016msec) 00:36:30.151 slat (nsec): min=6516, max=33164, avg=7812.61, stdev=2054.32 00:36:30.151 clat (usec): min=208, max=41898, avg=620.68, stdev=3592.05 00:36:30.151 lat (usec): min=215, max=41921, avg=628.49, stdev=3592.88 00:36:30.151 clat percentiles (usec): 00:36:30.151 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 235], 00:36:30.151 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:36:30.151 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 359], 00:36:30.151 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:30.151 | 99.99th=[41681] 00:36:30.151 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:36:30.151 slat (nsec): min=9293, max=42901, avg=10401.75, stdev=1610.70 00:36:30.151 clat (usec): min=143, max=524, avg=224.35, stdev=30.41 00:36:30.151 lat (usec): min=156, max=534, avg=234.75, stdev=30.59 00:36:30.151 clat percentiles (usec): 00:36:30.151 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 202], 00:36:30.151 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:36:30.151 | 70.00th=[ 229], 
80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 285], 00:36:30.151 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 396], 99.95th=[ 529], 00:36:30.151 | 99.99th=[ 529] 00:36:30.151 bw ( KiB/s): min= 4096, max= 8192, per=27.68%, avg=6144.00, stdev=2896.31, samples=2 00:36:30.151 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:36:30.151 lat (usec) : 250=59.89%, 500=39.37%, 750=0.43% 00:36:30.151 lat (msec) : 50=0.31% 00:36:30.151 cpu : usr=0.89%, sys=2.76%, ctx=2572, majf=0, minf=1 00:36:30.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.151 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.151 job1: (groupid=0, jobs=1): err= 0: pid=432252: Wed Nov 6 12:43:01 2024 00:36:30.151 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4144KiB/1041msec) 00:36:30.151 slat (nsec): min=7021, max=48948, avg=8317.00, stdev=2191.44 00:36:30.151 clat (usec): min=212, max=41009, avg=720.08, stdev=4357.51 00:36:30.151 lat (usec): min=232, max=41032, avg=728.40, stdev=4358.91 00:36:30.151 clat percentiles (usec): 00:36:30.151 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 239], 00:36:30.151 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:36:30.151 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:36:30.151 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:30.151 | 99.99th=[41157] 00:36:30.151 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:36:30.151 slat (nsec): min=9894, max=44515, avg=11233.66, stdev=2160.49 00:36:30.151 clat (usec): min=131, max=388, avg=170.16, stdev=18.38 00:36:30.151 lat (usec): min=151, max=428, avg=181.40, stdev=19.10 00:36:30.151 clat percentiles (usec): 
00:36:30.151 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:36:30.151 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:36:30.151 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 198], 00:36:30.151 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 318], 99.95th=[ 388], 00:36:30.151 | 99.99th=[ 388] 00:36:30.151 bw ( KiB/s): min= 2136, max=10152, per=27.68%, avg=6144.00, stdev=5668.17, samples=2 00:36:30.151 iops : min= 534, max= 2538, avg=1536.00, stdev=1417.04, samples=2 00:36:30.151 lat (usec) : 250=83.59%, 500=15.94% 00:36:30.151 lat (msec) : 50=0.47% 00:36:30.151 cpu : usr=1.83%, sys=4.04%, ctx=2572, majf=0, minf=2 00:36:30.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.151 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.151 job2: (groupid=0, jobs=1): err= 0: pid=432273: Wed Nov 6 12:43:01 2024 00:36:30.151 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:36:30.151 slat (nsec): min=9984, max=23667, avg=22546.18, stdev=2845.85 00:36:30.151 clat (usec): min=40801, max=42067, avg=41088.01, stdev=323.70 00:36:30.151 lat (usec): min=40824, max=42091, avg=41110.56, stdev=322.87 00:36:30.151 clat percentiles (usec): 00:36:30.151 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:36:30.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:30.151 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:36:30.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:30.151 | 99.99th=[42206] 00:36:30.151 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:36:30.151 slat (nsec): min=9487, max=39589, 
avg=11765.09, stdev=4310.32 00:36:30.151 clat (usec): min=138, max=524, avg=238.75, stdev=56.03 00:36:30.151 lat (usec): min=163, max=534, avg=250.51, stdev=55.08 00:36:30.151 clat percentiles (usec): 00:36:30.151 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 180], 00:36:30.151 | 30.00th=[ 190], 40.00th=[ 221], 50.00th=[ 251], 60.00th=[ 265], 00:36:30.151 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 314], 00:36:30.151 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 529], 99.95th=[ 529], 00:36:30.151 | 99.99th=[ 529] 00:36:30.151 bw ( KiB/s): min= 4096, max= 4096, per=18.46%, avg=4096.00, stdev= 0.00, samples=1 00:36:30.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:30.152 lat (usec) : 250=46.82%, 500=48.88%, 750=0.19% 00:36:30.152 lat (msec) : 50=4.12% 00:36:30.152 cpu : usr=0.19%, sys=0.58%, ctx=534, majf=0, minf=1 00:36:30.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.152 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.152 job3: (groupid=0, jobs=1): err= 0: pid=432280: Wed Nov 6 12:43:01 2024 00:36:30.152 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:36:30.152 slat (nsec): min=7549, max=46052, avg=8857.38, stdev=1566.86 00:36:30.152 clat (usec): min=213, max=1692, avg=265.34, stdev=40.16 00:36:30.152 lat (usec): min=221, max=1700, avg=274.20, stdev=40.21 00:36:30.152 clat percentiles (usec): 00:36:30.152 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 245], 00:36:30.152 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:36:30.152 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 306], 00:36:30.152 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 
396], 00:36:30.152 | 99.99th=[ 1696] 00:36:30.152 write: IOPS=2189, BW=8759KiB/s (8969kB/s)(8768KiB/1001msec); 0 zone resets 00:36:30.152 slat (usec): min=10, max=200, avg=12.39, stdev= 4.87 00:36:30.152 clat (usec): min=136, max=3763, avg=181.01, stdev=82.11 00:36:30.152 lat (usec): min=148, max=3785, avg=193.40, stdev=82.70 00:36:30.152 clat percentiles (usec): 00:36:30.152 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 155], 00:36:30.152 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:36:30.152 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 233], 00:36:30.152 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 379], 99.95th=[ 453], 00:36:30.152 | 99.99th=[ 3752] 00:36:30.152 bw ( KiB/s): min= 9040, max= 9040, per=40.73%, avg=9040.00, stdev= 0.00, samples=1 00:36:30.152 iops : min= 2260, max= 2260, avg=2260.00, stdev= 0.00, samples=1 00:36:30.152 lat (usec) : 250=65.07%, 500=34.88% 00:36:30.152 lat (msec) : 2=0.02%, 4=0.02% 00:36:30.152 cpu : usr=3.00%, sys=7.60%, ctx=4242, majf=0, minf=1 00:36:30.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.152 issued rwts: total=2048,2192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.152 00:36:30.152 Run status group 0 (all jobs): 00:36:30.152 READ: bw=15.5MiB/s (16.3MB/s), 85.1KiB/s-8184KiB/s (87.1kB/s-8380kB/s), io=16.2MiB (16.9MB), run=1001-1041msec 00:36:30.152 WRITE: bw=21.7MiB/s (22.7MB/s), 1981KiB/s-8759KiB/s (2028kB/s-8969kB/s), io=22.6MiB (23.7MB), run=1001-1041msec 00:36:30.152 00:36:30.152 Disk stats (read/write): 00:36:30.152 nvme0n1: ios=1056/1536, merge=0/0, ticks=1189/336, in_queue=1525, util=98.50% 00:36:30.152 nvme0n2: ios=1070/1536, merge=0/0, ticks=546/251, in_queue=797, util=88.75% 00:36:30.152 
nvme0n3: ios=16/512, merge=0/0, ticks=658/119, in_queue=777, util=88.18% 00:36:30.152 nvme0n4: ios=1633/2048, merge=0/0, ticks=1184/333, in_queue=1517, util=99.78% 00:36:30.152 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:30.152 [global] 00:36:30.152 thread=1 00:36:30.152 invalidate=1 00:36:30.152 rw=write 00:36:30.152 time_based=1 00:36:30.152 runtime=1 00:36:30.152 ioengine=libaio 00:36:30.152 direct=1 00:36:30.152 bs=4096 00:36:30.152 iodepth=128 00:36:30.152 norandommap=0 00:36:30.152 numjobs=1 00:36:30.152 00:36:30.152 verify_dump=1 00:36:30.152 verify_backlog=512 00:36:30.152 verify_state_save=0 00:36:30.152 do_verify=1 00:36:30.152 verify=crc32c-intel 00:36:30.152 [job0] 00:36:30.152 filename=/dev/nvme0n1 00:36:30.152 [job1] 00:36:30.152 filename=/dev/nvme0n2 00:36:30.152 [job2] 00:36:30.152 filename=/dev/nvme0n3 00:36:30.152 [job3] 00:36:30.152 filename=/dev/nvme0n4 00:36:30.152 Could not set queue depth (nvme0n1) 00:36:30.152 Could not set queue depth (nvme0n2) 00:36:30.152 Could not set queue depth (nvme0n3) 00:36:30.152 Could not set queue depth (nvme0n4) 00:36:30.416 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.416 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.416 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.416 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:30.416 fio-3.35 00:36:30.416 Starting 4 threads 00:36:31.814 00:36:31.814 job0: (groupid=0, jobs=1): err= 0: pid=432673: Wed Nov 6 12:43:03 2024 00:36:31.814 read: IOPS=2072, BW=8291KiB/s (8490kB/s)(8324KiB/1004msec) 00:36:31.814 slat (usec): min=3, max=28444, avg=240.53, 
stdev=1698.41 00:36:31.814 clat (usec): min=1177, max=77819, avg=29784.62, stdev=21531.78 00:36:31.814 lat (usec): min=4923, max=77827, avg=30025.15, stdev=21630.24 00:36:31.814 clat percentiles (usec): 00:36:31.814 | 1.00th=[ 5080], 5.00th=[10945], 10.00th=[11076], 20.00th=[11338], 00:36:31.814 | 30.00th=[11731], 40.00th=[14746], 50.00th=[17957], 60.00th=[25035], 00:36:31.814 | 70.00th=[43779], 80.00th=[54789], 90.00th=[66323], 95.00th=[70779], 00:36:31.814 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:36:31.814 | 99.99th=[78119] 00:36:31.814 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:36:31.814 slat (usec): min=3, max=24807, avg=188.67, stdev=1385.20 00:36:31.814 clat (usec): min=8427, max=76733, avg=25392.75, stdev=17657.63 00:36:31.814 lat (usec): min=10788, max=76741, avg=25581.43, stdev=17728.44 00:36:31.814 clat percentiles (usec): 00:36:31.814 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11076], 20.00th=[11469], 00:36:31.814 | 30.00th=[12256], 40.00th=[14353], 50.00th=[14746], 60.00th=[18220], 00:36:31.814 | 70.00th=[34341], 80.00th=[43254], 90.00th=[53216], 95.00th=[59507], 00:36:31.814 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:36:31.814 | 99.99th=[77071] 00:36:31.814 bw ( KiB/s): min= 8192, max=11528, per=13.98%, avg=9860.00, stdev=2358.91, samples=2 00:36:31.814 iops : min= 2048, max= 2882, avg=2465.00, stdev=589.73, samples=2 00:36:31.814 lat (msec) : 2=0.02%, 10=1.72%, 20=55.59%, 50=24.18%, 100=18.49% 00:36:31.814 cpu : usr=2.69%, sys=3.29%, ctx=157, majf=0, minf=1 00:36:31.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:36:31.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:31.814 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.814 latency : target=0, window=0, percentile=100.00%, depth=128 
00:36:31.814 job1: (groupid=0, jobs=1): err= 0: pid=432684: Wed Nov 6 12:43:03 2024 00:36:31.814 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:36:31.814 slat (nsec): min=1681, max=43712k, avg=108665.50, stdev=936847.48 00:36:31.814 clat (usec): min=7042, max=52697, avg=14602.29, stdev=7794.81 00:36:31.814 lat (usec): min=7050, max=52701, avg=14710.96, stdev=7837.54 00:36:31.814 clat percentiles (usec): 00:36:31.814 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:36:31.814 | 30.00th=[11207], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:36:31.814 | 70.00th=[14222], 80.00th=[16909], 90.00th=[21365], 95.00th=[30016], 00:36:31.814 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:36:31.814 | 99.99th=[52691] 00:36:31.814 write: IOPS=4676, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1003msec); 0 zone resets 00:36:31.814 slat (nsec): min=1900, max=18171k, avg=99068.41, stdev=727753.41 00:36:31.814 clat (usec): min=2620, max=37317, avg=12514.80, stdev=4235.70 00:36:31.814 lat (usec): min=2626, max=37357, avg=12613.86, stdev=4283.64 00:36:31.814 clat percentiles (usec): 00:36:31.814 | 1.00th=[ 3425], 5.00th=[ 7570], 10.00th=[ 8717], 20.00th=[ 9241], 00:36:31.814 | 30.00th=[10552], 40.00th=[12387], 50.00th=[12911], 60.00th=[13042], 00:36:31.815 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[21890], 00:36:31.815 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30540], 00:36:31.815 | 99.99th=[37487] 00:36:31.815 bw ( KiB/s): min=18156, max=18744, per=26.16%, avg=18450.00, stdev=415.78, samples=2 00:36:31.815 iops : min= 4539, max= 4686, avg=4612.50, stdev=103.94, samples=2 00:36:31.815 lat (msec) : 4=0.74%, 10=20.55%, 20=70.46%, 50=6.89%, 100=1.35% 00:36:31.815 cpu : usr=3.59%, sys=6.89%, ctx=279, majf=0, minf=1 00:36:31.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:31.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.815 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:31.815 issued rwts: total=4608,4691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:31.815 job2: (groupid=0, jobs=1): err= 0: pid=432704: Wed Nov 6 12:43:03 2024 00:36:31.815 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:36:31.815 slat (usec): min=2, max=12708, avg=108.94, stdev=825.79 00:36:31.815 clat (usec): min=5119, max=37641, avg=12991.55, stdev=4570.83 00:36:31.815 lat (usec): min=5123, max=37648, avg=13100.49, stdev=4639.47 00:36:31.815 clat percentiles (usec): 00:36:31.815 | 1.00th=[ 6456], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9896], 00:36:31.815 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12256], 60.00th=[12911], 00:36:31.815 | 70.00th=[13960], 80.00th=[14484], 90.00th=[16909], 95.00th=[22676], 00:36:31.815 | 99.00th=[33162], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:36:31.815 | 99.99th=[37487] 00:36:31.815 write: IOPS=4675, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1008msec); 0 zone resets 00:36:31.815 slat (usec): min=3, max=12020, avg=99.86, stdev=633.73 00:36:31.815 clat (usec): min=1552, max=37646, avg=14425.40, stdev=6470.11 00:36:31.815 lat (usec): min=1566, max=37663, avg=14525.26, stdev=6523.78 00:36:31.815 clat percentiles (usec): 00:36:31.815 | 1.00th=[ 5604], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9372], 00:36:31.815 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[13304], 00:36:31.815 | 70.00th=[15401], 80.00th=[22414], 90.00th=[24249], 95.00th=[27657], 00:36:31.815 | 99.00th=[29230], 99.50th=[29230], 99.90th=[33817], 99.95th=[35390], 00:36:31.815 | 99.99th=[37487] 00:36:31.815 bw ( KiB/s): min=16384, max=20480, per=26.13%, avg=18432.00, stdev=2896.31, samples=2 00:36:31.815 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:36:31.815 lat (msec) : 2=0.02%, 4=0.34%, 10=23.27%, 20=60.74%, 50=15.62% 00:36:31.815 cpu : usr=4.07%, sys=5.36%, 
ctx=341, majf=0, minf=1 00:36:31.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:31.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:31.815 issued rwts: total=4608,4713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:31.815 job3: (groupid=0, jobs=1): err= 0: pid=432714: Wed Nov 6 12:43:03 2024 00:36:31.815 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:36:31.815 slat (nsec): min=1962, max=10249k, avg=86715.97, stdev=703538.95 00:36:31.815 clat (usec): min=5066, max=39833, avg=11050.93, stdev=3386.35 00:36:31.815 lat (usec): min=5074, max=39840, avg=11137.65, stdev=3443.95 00:36:31.815 clat percentiles (usec): 00:36:31.815 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 8979], 00:36:31.815 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:36:31.815 | 70.00th=[11338], 80.00th=[12256], 90.00th=[15139], 95.00th=[16581], 00:36:31.815 | 99.00th=[23725], 99.50th=[32113], 99.90th=[36439], 99.95th=[39584], 00:36:31.815 | 99.99th=[39584] 00:36:31.815 write: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(22.8MiB/1010msec); 0 zone resets 00:36:31.815 slat (usec): min=3, max=9725, avg=81.27, stdev=594.59 00:36:31.815 clat (usec): min=1430, max=39834, avg=11260.24, stdev=6558.06 00:36:31.815 lat (usec): min=1444, max=39842, avg=11341.50, stdev=6604.30 00:36:31.815 clat percentiles (usec): 00:36:31.815 | 1.00th=[ 4490], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7439], 00:36:31.815 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10159], 00:36:31.815 | 70.00th=[10552], 80.00th=[12911], 90.00th=[15533], 95.00th=[31589], 00:36:31.815 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:36:31.815 | 99.99th=[39584] 00:36:31.815 bw ( KiB/s): min=19328, max=26476, per=32.47%, avg=22902.00, 
stdev=5054.40, samples=2 00:36:31.815 iops : min= 4832, max= 6619, avg=5725.50, stdev=1263.60, samples=2 00:36:31.815 lat (msec) : 2=0.05%, 4=0.31%, 10=49.38%, 20=45.35%, 50=4.91% 00:36:31.815 cpu : usr=5.55%, sys=6.24%, ctx=315, majf=0, minf=2 00:36:31.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:31.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:31.815 issued rwts: total=5632,5846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:31.815 00:36:31.815 Run status group 0 (all jobs): 00:36:31.815 READ: bw=65.5MiB/s (68.7MB/s), 8291KiB/s-21.8MiB/s (8490kB/s-22.8MB/s), io=66.1MiB (69.3MB), run=1003-1010msec 00:36:31.815 WRITE: bw=68.9MiB/s (72.2MB/s), 9.96MiB/s-22.6MiB/s (10.4MB/s-23.7MB/s), io=69.6MiB (72.9MB), run=1003-1010msec 00:36:31.815 00:36:31.815 Disk stats (read/write): 00:36:31.815 nvme0n1: ios=1841/2048, merge=0/0, ticks=15816/10348, in_queue=26164, util=89.08% 00:36:31.815 nvme0n2: ios=3620/4094, merge=0/0, ticks=24898/21875, in_queue=46773, util=99.18% 00:36:31.815 nvme0n3: ios=3601/3943, merge=0/0, ticks=45605/56266, in_queue=101871, util=96.28% 00:36:31.815 nvme0n4: ios=4637/5031, merge=0/0, ticks=48005/51829, in_queue=99834, util=95.80% 00:36:31.815 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:31.815 [global] 00:36:31.815 thread=1 00:36:31.815 invalidate=1 00:36:31.815 rw=randwrite 00:36:31.815 time_based=1 00:36:31.815 runtime=1 00:36:31.815 ioengine=libaio 00:36:31.815 direct=1 00:36:31.815 bs=4096 00:36:31.815 iodepth=128 00:36:31.815 norandommap=0 00:36:31.815 numjobs=1 00:36:31.815 00:36:31.815 verify_dump=1 00:36:31.815 verify_backlog=512 00:36:31.815 
verify_state_save=0 00:36:31.815 do_verify=1 00:36:31.815 verify=crc32c-intel 00:36:31.815 [job0] 00:36:31.815 filename=/dev/nvme0n1 00:36:31.815 [job1] 00:36:31.815 filename=/dev/nvme0n2 00:36:31.815 [job2] 00:36:31.815 filename=/dev/nvme0n3 00:36:31.815 [job3] 00:36:31.815 filename=/dev/nvme0n4 00:36:31.815 Could not set queue depth (nvme0n1) 00:36:31.815 Could not set queue depth (nvme0n2) 00:36:31.815 Could not set queue depth (nvme0n3) 00:36:31.815 Could not set queue depth (nvme0n4) 00:36:32.077 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.077 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.077 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.077 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.077 fio-3.35 00:36:32.077 Starting 4 threads 00:36:33.451 00:36:33.451 job0: (groupid=0, jobs=1): err= 0: pid=433113: Wed Nov 6 12:43:04 2024 00:36:33.451 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:36:33.451 slat (usec): min=2, max=9092, avg=72.65, stdev=625.12 00:36:33.451 clat (usec): min=3004, max=20857, avg=9715.64, stdev=2356.56 00:36:33.451 lat (usec): min=3016, max=22022, avg=9788.29, stdev=2419.38 00:36:33.451 clat percentiles (usec): 00:36:33.451 | 1.00th=[ 7177], 5.00th=[ 7570], 10.00th=[ 7767], 20.00th=[ 8291], 00:36:33.451 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:36:33.451 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[13566], 95.00th=[14877], 00:36:33.451 | 99.00th=[17695], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:36:33.451 | 99.99th=[20841] 00:36:33.451 write: IOPS=7049, BW=27.5MiB/s (28.9MB/s)(27.7MiB/1006msec); 0 zone resets 00:36:33.451 slat (usec): min=3, max=18121, avg=65.17, stdev=555.19 00:36:33.451 clat 
(usec): min=271, max=32007, avg=8850.72, stdev=2971.69 00:36:33.451 lat (usec): min=1248, max=32017, avg=8915.89, stdev=2995.97 00:36:33.451 clat percentiles (usec): 00:36:33.451 | 1.00th=[ 3326], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 6652], 00:36:33.451 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8848], 00:36:33.451 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[12387], 95.00th=[14746], 00:36:33.451 | 99.00th=[20841], 99.50th=[20841], 99.90th=[22152], 99.95th=[22414], 00:36:33.451 | 99.99th=[32113] 00:36:33.451 bw ( KiB/s): min=27792, max=27920, per=39.44%, avg=27856.00, stdev=90.51, samples=2 00:36:33.451 iops : min= 6948, max= 6980, avg=6964.00, stdev=22.63, samples=2 00:36:33.451 lat (usec) : 500=0.01% 00:36:33.451 lat (msec) : 2=0.04%, 4=0.71%, 10=77.92%, 20=20.73%, 50=0.60% 00:36:33.451 cpu : usr=5.77%, sys=8.06%, ctx=345, majf=0, minf=1 00:36:33.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:33.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:33.451 issued rwts: total=6656,7092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:33.451 job1: (groupid=0, jobs=1): err= 0: pid=433124: Wed Nov 6 12:43:04 2024 00:36:33.451 read: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1047msec) 00:36:33.451 slat (nsec): min=1969, max=6131.7k, avg=105521.60, stdev=630310.79 00:36:33.451 clat (usec): min=8487, max=47145, avg=13574.95, stdev=2573.46 00:36:33.451 lat (usec): min=8491, max=47154, avg=13680.48, stdev=2591.75 00:36:33.451 clat percentiles (usec): 00:36:33.451 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10814], 20.00th=[11731], 00:36:33.451 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13566], 60.00th=[14353], 00:36:33.451 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16319], 95.00th=[16909], 00:36:33.451 | 99.00th=[17957], 
99.50th=[18744], 99.90th=[46924], 99.95th=[46924], 00:36:33.451 | 99.99th=[46924] 00:36:33.451 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1047msec); 0 zone resets 00:36:33.451 slat (usec): min=3, max=34326, avg=118.01, stdev=941.99 00:36:33.451 clat (usec): min=6724, max=75676, avg=16464.03, stdev=9953.15 00:36:33.451 lat (usec): min=6731, max=75706, avg=16582.04, stdev=10026.15 00:36:33.451 clat percentiles (usec): 00:36:33.451 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[12649], 20.00th=[13042], 00:36:33.451 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:36:33.451 | 70.00th=[13960], 80.00th=[14222], 90.00th=[18744], 95.00th=[47449], 00:36:33.451 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[68682], 00:36:33.451 | 99.99th=[76022] 00:36:33.451 bw ( KiB/s): min=16384, max=19520, per=25.42%, avg=17952.00, stdev=2217.49, samples=2 00:36:33.451 iops : min= 4096, max= 4880, avg=4488.00, stdev=554.37, samples=2 00:36:33.451 lat (msec) : 10=2.47%, 20=92.44%, 50=3.63%, 100=1.47% 00:36:33.451 cpu : usr=4.49%, sys=4.11%, ctx=402, majf=0, minf=1 00:36:33.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:33.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:33.451 issued rwts: total=4104,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:33.451 job2: (groupid=0, jobs=1): err= 0: pid=433145: Wed Nov 6 12:43:04 2024 00:36:33.451 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:36:33.451 slat (nsec): min=1927, max=26250k, avg=200565.18, stdev=1503205.77 00:36:33.451 clat (usec): min=12365, max=72109, avg=27559.74, stdev=11496.41 00:36:33.451 lat (usec): min=12372, max=77157, avg=27760.31, stdev=11629.86 00:36:33.451 clat percentiles (usec): 00:36:33.451 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12649], 
20.00th=[13042], 00:36:33.451 | 30.00th=[20055], 40.00th=[22938], 50.00th=[29754], 60.00th=[31589], 00:36:33.451 | 70.00th=[33817], 80.00th=[35914], 90.00th=[41157], 95.00th=[47449], 00:36:33.451 | 99.00th=[55313], 99.50th=[65799], 99.90th=[71828], 99.95th=[71828], 00:36:33.451 | 99.99th=[71828] 00:36:33.451 write: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(9.90MiB/1007msec); 0 zone resets 00:36:33.451 slat (usec): min=3, max=24483, avg=228.81, stdev=1441.14 00:36:33.451 clat (usec): min=1103, max=105158, avg=28110.25, stdev=17267.49 00:36:33.452 lat (msec): min=6, max=105, avg=28.34, stdev=17.39 00:36:33.452 clat percentiles (msec): 00:36:33.452 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 18], 00:36:33.452 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 22], 60.00th=[ 24], 00:36:33.452 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 48], 95.00th=[ 57], 00:36:33.452 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 106], 99.95th=[ 106], 00:36:33.452 | 99.99th=[ 106] 00:36:33.452 bw ( KiB/s): min= 9528, max= 9720, per=13.63%, avg=9624.00, stdev=135.76, samples=2 00:36:33.452 iops : min= 2382, max= 2430, avg=2406.00, stdev=33.94, samples=2 00:36:33.452 lat (msec) : 2=0.02%, 10=1.37%, 20=27.30%, 50=66.54%, 100=4.17% 00:36:33.452 lat (msec) : 250=0.59% 00:36:33.452 cpu : usr=1.89%, sys=2.78%, ctx=216, majf=0, minf=2 00:36:33.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:36:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:33.452 issued rwts: total=2048,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:33.452 job3: (groupid=0, jobs=1): err= 0: pid=433152: Wed Nov 6 12:43:04 2024 00:36:33.452 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:36:33.452 slat (usec): min=2, max=15259, avg=123.65, stdev=924.74 00:36:33.452 clat (usec): min=3987, max=57297, 
avg=15901.49, stdev=6619.29 00:36:33.452 lat (usec): min=3999, max=57303, avg=16025.15, stdev=6703.16 00:36:33.452 clat percentiles (usec): 00:36:33.452 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10945], 20.00th=[11600], 00:36:33.452 | 30.00th=[12387], 40.00th=[13829], 50.00th=[14746], 60.00th=[15533], 00:36:33.452 | 70.00th=[16450], 80.00th=[17695], 90.00th=[21890], 95.00th=[26346], 00:36:33.452 | 99.00th=[47449], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:36:33.452 | 99.99th=[57410] 00:36:33.452 write: IOPS=4207, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1011msec); 0 zone resets 00:36:33.452 slat (usec): min=3, max=12559, avg=96.57, stdev=690.26 00:36:33.452 clat (usec): min=485, max=57291, avg=14843.21, stdev=6929.70 00:36:33.452 lat (usec): min=494, max=57297, avg=14939.77, stdev=6971.74 00:36:33.452 clat percentiles (usec): 00:36:33.452 | 1.00th=[ 3261], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 9765], 00:36:33.452 | 30.00th=[10814], 40.00th=[11863], 50.00th=[13042], 60.00th=[15008], 00:36:33.452 | 70.00th=[17433], 80.00th=[20579], 90.00th=[21365], 95.00th=[24773], 00:36:33.452 | 99.00th=[43779], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:36:33.452 | 99.99th=[57410] 00:36:33.452 bw ( KiB/s): min=15112, max=17904, per=23.37%, avg=16508.00, stdev=1974.24, samples=2 00:36:33.452 iops : min= 3778, max= 4476, avg=4127.00, stdev=493.56, samples=2 00:36:33.452 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.08% 00:36:33.452 lat (msec) : 2=0.05%, 4=0.51%, 10=12.83%, 20=69.14%, 50=16.91% 00:36:33.452 lat (msec) : 100=0.44% 00:36:33.452 cpu : usr=3.27%, sys=6.24%, ctx=262, majf=0, minf=2 00:36:33.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:33.452 issued rwts: total=4096,4254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.452 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:36:33.452 00:36:33.452 Run status group 0 (all jobs): 00:36:33.452 READ: bw=63.1MiB/s (66.1MB/s), 8135KiB/s-25.8MiB/s (8330kB/s-27.1MB/s), io=66.0MiB (69.2MB), run=1006-1047msec 00:36:33.452 WRITE: bw=69.0MiB/s (72.3MB/s), 9.83MiB/s-27.5MiB/s (10.3MB/s-28.9MB/s), io=72.2MiB (75.7MB), run=1006-1047msec 00:36:33.452 00:36:33.452 Disk stats (read/write): 00:36:33.452 nvme0n1: ios=5597/5639, merge=0/0, ticks=53887/47684, in_queue=101571, util=98.50% 00:36:33.452 nvme0n2: ios=3456/3584, merge=0/0, ticks=23220/27645, in_queue=50865, util=84.78% 00:36:33.452 nvme0n3: ios=1677/2048, merge=0/0, ticks=21617/36026, in_queue=57643, util=87.98% 00:36:33.452 nvme0n4: ios=3298/3584, merge=0/0, ticks=48675/49992, in_queue=98667, util=89.44% 00:36:33.452 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:33.452 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=433309 00:36:33.452 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:33.452 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:33.452 [global] 00:36:33.452 thread=1 00:36:33.452 invalidate=1 00:36:33.452 rw=read 00:36:33.452 time_based=1 00:36:33.452 runtime=10 00:36:33.452 ioengine=libaio 00:36:33.452 direct=1 00:36:33.452 bs=4096 00:36:33.452 iodepth=1 00:36:33.452 norandommap=1 00:36:33.452 numjobs=1 00:36:33.452 00:36:33.452 [job0] 00:36:33.452 filename=/dev/nvme0n1 00:36:33.452 [job1] 00:36:33.452 filename=/dev/nvme0n2 00:36:33.452 [job2] 00:36:33.452 filename=/dev/nvme0n3 00:36:33.452 [job3] 00:36:33.452 filename=/dev/nvme0n4 00:36:33.452 Could not set queue depth (nvme0n1) 00:36:33.452 Could not set queue depth (nvme0n2) 00:36:33.452 Could not set queue depth (nvme0n3) 00:36:33.452 
Could not set queue depth (nvme0n4) 00:36:33.710 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:33.710 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:33.710 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:33.710 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:33.710 fio-3.35 00:36:33.710 Starting 4 threads 00:36:36.236 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:36.494 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:36.494 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:36:36.494 fio: pid=433593, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:36.752 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:36.752 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:37.009 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=303104, buflen=4096 00:36:37.009 fio: pid=433588, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.009 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.009 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:37.009 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:36:37.009 fio: pid=433564, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.268 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=352256, buflen=4096 00:36:37.268 fio: pid=433574, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:36:37.268 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.268 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:37.268 00:36:37.268 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=433564: Wed Nov 6 12:43:08 2024 00:36:37.268 read: IOPS=25, BW=101KiB/s (104kB/s)(324KiB/3202msec) 00:36:37.268 slat (usec): min=9, max=5759, avg=94.57, stdev=633.36 00:36:37.268 clat (usec): min=338, max=43022, avg=39156.71, stdev=8888.61 00:36:37.268 lat (usec): min=360, max=47973, avg=39252.19, stdev=8934.29 00:36:37.268 clat percentiles (usec): 00:36:37.268 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:37.268 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:37.268 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:36:37.268 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:37.268 | 99.99th=[43254] 00:36:37.268 bw ( KiB/s): min= 96, max= 106, per=28.50%, avg=101.67, stdev= 4.46, samples=6 00:36:37.268 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:36:37.268 lat (usec) : 500=3.66%, 750=1.22% 00:36:37.268 lat (msec) : 50=93.90% 00:36:37.268 
cpu : usr=0.12%, sys=0.00%, ctx=84, majf=0, minf=1 00:36:37.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.268 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=433574: Wed Nov 6 12:43:08 2024 00:36:37.268 read: IOPS=25, BW=99.0KiB/s (101kB/s)(344KiB/3476msec) 00:36:37.268 slat (usec): min=9, max=17740, avg=693.06, stdev=3102.62 00:36:37.268 clat (usec): min=477, max=42121, avg=39701.82, stdev=7504.35 00:36:37.268 lat (usec): min=501, max=58824, avg=40263.30, stdev=8127.67 00:36:37.268 clat percentiles (usec): 00:36:37.268 | 1.00th=[ 478], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:37.268 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:37.268 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:37.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:37.268 | 99.99th=[42206] 00:36:37.268 bw ( KiB/s): min= 96, max= 106, per=28.21%, avg=100.33, stdev= 4.80, samples=6 00:36:37.268 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:36:37.268 lat (usec) : 500=2.30%, 750=1.15% 00:36:37.268 lat (msec) : 50=95.40% 00:36:37.268 cpu : usr=0.00%, sys=0.29%, ctx=91, majf=0, minf=2 00:36:37.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.268 latency : target=0, window=0, percentile=100.00%, depth=1 
00:36:37.268 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=433588: Wed Nov 6 12:43:08 2024 00:36:37.268 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(296KiB/3033msec) 00:36:37.268 slat (usec): min=11, max=13841, avg=208.28, stdev=1595.47 00:36:37.268 clat (usec): min=557, max=41976, avg=40483.66, stdev=4710.78 00:36:37.268 lat (usec): min=589, max=54939, avg=40694.41, stdev=4999.45 00:36:37.268 clat percentiles (usec): 00:36:37.268 | 1.00th=[ 562], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:37.268 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:37.268 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:37.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:37.268 | 99.99th=[42206] 00:36:37.268 bw ( KiB/s): min= 96, max= 104, per=27.37%, avg=97.60, stdev= 3.58, samples=5 00:36:37.268 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:36:37.268 lat (usec) : 750=1.33% 00:36:37.268 lat (msec) : 50=97.33% 00:36:37.268 cpu : usr=0.00%, sys=0.13%, ctx=77, majf=0, minf=2 00:36:37.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.268 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=433593: Wed Nov 6 12:43:08 2024 00:36:37.268 read: IOPS=24, BW=97.7KiB/s (100kB/s)(268KiB/2742msec) 00:36:37.268 slat (nsec): min=13427, max=36982, avg=24632.74, stdev=2223.10 00:36:37.268 clat (usec): min=456, max=42015, avg=40504.74, stdev=4979.00 00:36:37.268 lat (usec): min=493, max=42040, avg=40529.37, stdev=4977.45 00:36:37.268 clat 
percentiles (usec): 00:36:37.268 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:37.268 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:37.268 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:37.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:37.268 | 99.99th=[42206] 00:36:37.268 bw ( KiB/s): min= 96, max= 104, per=27.37%, avg=97.60, stdev= 3.58, samples=5 00:36:37.268 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:36:37.268 lat (usec) : 500=1.47% 00:36:37.268 lat (msec) : 50=97.06% 00:36:37.268 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=2 00:36:37.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.268 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.268 00:36:37.268 Run status group 0 (all jobs): 00:36:37.269 READ: bw=354KiB/s (363kB/s), 97.6KiB/s-101KiB/s (99.9kB/s-104kB/s), io=1232KiB (1262kB), run=2742-3476msec 00:36:37.269 00:36:37.269 Disk stats (read/write): 00:36:37.269 nvme0n1: ios=79/0, merge=0/0, ticks=3090/0, in_queue=3090, util=95.50% 00:36:37.269 nvme0n2: ios=83/0, merge=0/0, ticks=3293/0, in_queue=3293, util=95.09% 00:36:37.269 nvme0n3: ios=70/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.11% 00:36:37.269 nvme0n4: ios=64/0, merge=0/0, ticks=2593/0, in_queue=2593, util=96.44% 00:36:37.526 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.526 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3 00:36:37.785 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.785 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:38.350 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:38.350 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:38.350 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:38.350 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 433309 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:38.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 
00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:38.916 nvmf hotplug test: fio failed as expected 00:36:38.916 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.174 rmmod nvme_tcp 00:36:39.174 rmmod nvme_fabrics 00:36:39.174 rmmod nvme_keyring 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 430256 ']' 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 430256 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 430256 ']' 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 430256 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 430256 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 430256' 00:36:39.174 killing process with pid 430256 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 430256 00:36:39.174 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 430256 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.433 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:42.034 00:36:42.034 real 0m28.627s 00:36:42.034 user 2m3.539s 00:36:42.034 sys 0m11.226s 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:42.034 ************************************ 00:36:42.034 END TEST nvmf_fio_target 00:36:42.034 ************************************ 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:42.034 ************************************ 00:36:42.034 START TEST nvmf_bdevio 00:36:42.034 ************************************ 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:42.034 * Looking for test storage... 
00:36:42.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.034 12:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:42.034 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.035 --rc genhtml_branch_coverage=1 00:36:42.035 --rc genhtml_function_coverage=1 00:36:42.035 --rc genhtml_legend=1 00:36:42.035 --rc geninfo_all_blocks=1 00:36:42.035 --rc geninfo_unexecuted_blocks=1 00:36:42.035 00:36:42.035 ' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.035 --rc genhtml_branch_coverage=1 00:36:42.035 --rc genhtml_function_coverage=1 00:36:42.035 --rc genhtml_legend=1 00:36:42.035 --rc geninfo_all_blocks=1 00:36:42.035 --rc geninfo_unexecuted_blocks=1 00:36:42.035 00:36:42.035 ' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.035 --rc genhtml_branch_coverage=1 00:36:42.035 --rc genhtml_function_coverage=1 00:36:42.035 --rc genhtml_legend=1 00:36:42.035 --rc geninfo_all_blocks=1 00:36:42.035 --rc geninfo_unexecuted_blocks=1 00:36:42.035 00:36:42.035 ' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.035 --rc genhtml_branch_coverage=1 00:36:42.035 --rc genhtml_function_coverage=1 00:36:42.035 --rc genhtml_legend=1 00:36:42.035 --rc geninfo_all_blocks=1 00:36:42.035 --rc geninfo_unexecuted_blocks=1 00:36:42.035 00:36:42.035 ' 00:36:42.035 12:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.035 12:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.035 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.390 12:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:47.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.390 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:47.391 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.391 12:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:47.391 Found net devices under 0000:af:00.0: cvl_0_0 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:47.391 Found net devices under 0000:af:00.1: cvl_0_1 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.391 12:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:47.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:36:47.391 00:36:47.391 --- 10.0.0.2 ping statistics --- 00:36:47.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.391 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:36:47.391 00:36:47.391 --- 10.0.0.1 ping statistics --- 00:36:47.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.391 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:47.391 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:47.649 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=438163 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 438163 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 438163 ']' 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:47.650 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.650 [2024-11-06 12:43:19.091944] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:47.650 [2024-11-06 12:43:19.093317] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:36:47.650 [2024-11-06 12:43:19.093361] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.650 [2024-11-06 12:43:19.171397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.650 [2024-11-06 12:43:19.210914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.650 [2024-11-06 12:43:19.210951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.650 [2024-11-06 12:43:19.210958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.650 [2024-11-06 12:43:19.210964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.650 [2024-11-06 12:43:19.210969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.650 [2024-11-06 12:43:19.212632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:47.650 [2024-11-06 12:43:19.212739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:47.650 [2024-11-06 12:43:19.212851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.650 [2024-11-06 12:43:19.212853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:47.908 [2024-11-06 12:43:19.276607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:47.908 [2024-11-06 12:43:19.277148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:47.908 [2024-11-06 12:43:19.277663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:47.908 [2024-11-06 12:43:19.277742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:47.908 [2024-11-06 12:43:19.277883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 [2024-11-06 12:43:19.353201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 Malloc0 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.908 [2024-11-06 12:43:19.421422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.908 { 00:36:47.908 "params": { 00:36:47.908 "name": "Nvme$subsystem", 00:36:47.908 "trtype": "$TEST_TRANSPORT", 00:36:47.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.908 "adrfam": "ipv4", 00:36:47.908 "trsvcid": "$NVMF_PORT", 00:36:47.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.908 "hdgst": ${hdgst:-false}, 00:36:47.908 "ddgst": ${ddgst:-false} 00:36:47.908 }, 00:36:47.908 "method": "bdev_nvme_attach_controller" 00:36:47.908 } 00:36:47.908 EOF 00:36:47.908 )") 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:47.908 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:47.908 "params": { 00:36:47.908 "name": "Nvme1", 00:36:47.908 "trtype": "tcp", 00:36:47.908 "traddr": "10.0.0.2", 00:36:47.908 "adrfam": "ipv4", 00:36:47.908 "trsvcid": "4420", 00:36:47.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:47.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:47.908 "hdgst": false, 00:36:47.908 "ddgst": false 00:36:47.908 }, 00:36:47.908 "method": "bdev_nvme_attach_controller" 00:36:47.908 }' 00:36:47.908 [2024-11-06 12:43:19.475171] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:36:47.908 [2024-11-06 12:43:19.475229] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438293 ] 00:36:48.166 [2024-11-06 12:43:19.572004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:48.166 [2024-11-06 12:43:19.626109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.166 [2024-11-06 12:43:19.626129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.166 [2024-11-06 12:43:19.626133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.423 I/O targets: 00:36:48.423 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:48.423 00:36:48.423 00:36:48.423 CUnit - A unit testing framework for C - Version 2.1-3 00:36:48.423 http://cunit.sourceforge.net/ 00:36:48.423 00:36:48.423 00:36:48.423 Suite: bdevio tests on: Nvme1n1 00:36:48.423 Test: blockdev write read block ...passed 00:36:48.423 Test: blockdev write zeroes read block ...passed 00:36:48.423 Test: blockdev write zeroes read no split ...passed 00:36:48.423 Test: blockdev 
write zeroes read split ...passed 00:36:48.423 Test: blockdev write zeroes read split partial ...passed 00:36:48.423 Test: blockdev reset ...[2024-11-06 12:43:19.973284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:48.423 [2024-11-06 12:43:19.973364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f697c0 (9): Bad file descriptor 00:36:48.423 [2024-11-06 12:43:20.017856] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:48.423 passed 00:36:48.423 Test: blockdev write read 8 blocks ...passed 00:36:48.423 Test: blockdev write read size > 128k ...passed 00:36:48.423 Test: blockdev write read invalid size ...passed 00:36:48.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:48.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:48.682 Test: blockdev write read max offset ...passed 00:36:48.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:48.682 Test: blockdev writev readv 8 blocks ...passed 00:36:48.682 Test: blockdev writev readv 30 x 1block ...passed 00:36:48.682 Test: blockdev writev readv block ...passed 00:36:48.682 Test: blockdev writev readv size > 128k ...passed 00:36:48.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:48.682 Test: blockdev comparev and writev ...[2024-11-06 12:43:20.229697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.229731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.229745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 
[2024-11-06 12:43:20.229752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:48.682 [2024-11-06 12:43:20.230752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.682 [2024-11-06 12:43:20.230758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:48.682 passed 00:36:48.941 Test: blockdev nvme passthru rw ...passed 00:36:48.941 Test: blockdev nvme passthru vendor specific ...[2024-11-06 12:43:20.312821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.941 [2024-11-06 12:43:20.312840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:48.941 [2024-11-06 12:43:20.312953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.941 [2024-11-06 12:43:20.312963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:48.941 [2024-11-06 12:43:20.313071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.941 [2024-11-06 12:43:20.313080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:48.941 [2024-11-06 12:43:20.313183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.941 [2024-11-06 12:43:20.313191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:48.941 passed 00:36:48.941 Test: blockdev nvme admin passthru ...passed 00:36:48.941 Test: blockdev copy ...passed 00:36:48.941 00:36:48.941 Run Summary: Type Total Ran Passed Failed Inactive 00:36:48.941 suites 1 1 n/a 0 0 00:36:48.941 tests 23 23 23 0 0 00:36:48.941 asserts 152 152 152 0 n/a 00:36:48.941 00:36:48.941 Elapsed time = 1.161 
seconds 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.942 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.942 rmmod nvme_tcp 00:36:48.942 rmmod nvme_fabrics 00:36:49.201 rmmod nvme_keyring 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 438163 ']' 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 438163 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 438163 ']' 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 438163 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 438163 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 438163' 00:36:49.201 killing process with pid 438163 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 438163 00:36:49.201 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 438163 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.462 12:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.369 00:36:51.369 real 0m9.793s 00:36:51.369 user 0m8.909s 00:36:51.369 sys 0m5.070s 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:51.369 ************************************ 00:36:51.369 END TEST nvmf_bdevio 00:36:51.369 ************************************ 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:51.369 00:36:51.369 real 4m34.563s 00:36:51.369 user 10m10.309s 00:36:51.369 sys 1m48.165s 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:36:51.369 12:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:51.369 ************************************ 00:36:51.369 END TEST nvmf_target_core_interrupt_mode 00:36:51.369 ************************************ 00:36:51.369 12:43:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:51.369 12:43:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:51.369 12:43:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:51.369 12:43:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.628 ************************************ 00:36:51.628 START TEST nvmf_interrupt 00:36:51.628 ************************************ 00:36:51.628 12:43:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:51.628 * Looking for test storage... 
00:36:51.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.628 --rc genhtml_branch_coverage=1 00:36:51.628 --rc genhtml_function_coverage=1 00:36:51.628 --rc genhtml_legend=1 00:36:51.628 --rc geninfo_all_blocks=1 00:36:51.628 --rc geninfo_unexecuted_blocks=1 00:36:51.628 00:36:51.628 ' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.628 --rc genhtml_branch_coverage=1 00:36:51.628 --rc 
genhtml_function_coverage=1 00:36:51.628 --rc genhtml_legend=1 00:36:51.628 --rc geninfo_all_blocks=1 00:36:51.628 --rc geninfo_unexecuted_blocks=1 00:36:51.628 00:36:51.628 ' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.628 --rc genhtml_branch_coverage=1 00:36:51.628 --rc genhtml_function_coverage=1 00:36:51.628 --rc genhtml_legend=1 00:36:51.628 --rc geninfo_all_blocks=1 00:36:51.628 --rc geninfo_unexecuted_blocks=1 00:36:51.628 00:36:51.628 ' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:51.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.628 --rc genhtml_branch_coverage=1 00:36:51.628 --rc genhtml_function_coverage=1 00:36:51.628 --rc genhtml_legend=1 00:36:51.628 --rc geninfo_all_blocks=1 00:36:51.628 --rc geninfo_unexecuted_blocks=1 00:36:51.628 00:36:51.628 ' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.628 
12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.628 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.629 
12:43:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.629 12:43:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.629 
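`build_nvmf_app_args` above assembles the target's command line conditionally in a bash array (`NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)` always, `NVMF_APP+=(--interrupt-mode)` only when this interrupt-mode test requests it). A sketch of that array-append pattern with simplified variable names:

```shell
#!/usr/bin/env bash
# Build a command line incrementally in an array so each argument
# stays one word under quoting (the build_nvmf_app_args pattern
# traced above). Variable names here are simplified.
SHM_ID=0
INTERRUPT_MODE=1

APP=(nvmf_tgt)
APP+=(-i "$SHM_ID" -e 0xFFFF)       # always passed
if [ "$INTERRUPT_MODE" -eq 1 ]; then
    APP+=(--interrupt-mode)         # only for interrupt-mode runs
fi

# "${APP[@]}" expands each element as a separate argument.
printf '%s\n' "${APP[@]}"
```

Using an array instead of a flat string is what lets the later `ip netns exec … "${NVMF_APP[@]}"` invocation survive arguments containing spaces.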
12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.629 12:43:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:56.898 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.899 12:43:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:56.899 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:56.899 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.899 12:43:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:56.899 Found net devices under 0000:af:00.0: cvl_0_0 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:56.899 Found net devices under 0000:af:00.1: cvl_0_1 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.899 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.158 12:43:28 
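`nvmf_tcp_init` above moves the target NIC (`cvl_0_0`) into its own network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host. A dry-run sketch of that sequence, with `run=echo` so it can be inspected without root or the `cvl_0_*` hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns split performed by nvmf_tcp_init above.
# run=echo prints each command instead of executing it; drop it
# (and run as root, with real interfaces) to apply the setup.
run=echo
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0   # moved into the namespace, gets 10.0.0.2
INIT_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1

$run ip netns add "$NS"
$run ip link set "$TARGET_IF" netns "$NS"
$run ip addr add 10.0.0.1/24 dev "$INIT_IF"
$run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$run ip link set "$INIT_IF" up
$run ip netns exec "$NS" ip link set "$TARGET_IF" up
$run ip netns exec "$NS" ip link set lo up
```

Once the split is in place, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what `NVMF_TARGET_NS_CMD` holds.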
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:36:57.158 00:36:57.158 --- 10.0.0.2 ping statistics --- 00:36:57.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.158 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:57.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:36:57.158 00:36:57.158 --- 10.0.0.1 ping statistics --- 00:36:57.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.158 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:57.158 12:43:28 
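The `ipts` wrapper above re-issues `iptables` with `-m comment --comment 'SPDK_NVMF:…'`, embedding the original rule text in the comment, presumably so teardown can find and remove exactly the rules the test inserted. A dry-run sketch of that tagging pattern (`run=echo` because real rule insertion needs root; the wrapper body is a guess at the idea, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# Tag firewall rules with a recognizable comment so later cleanup
# can identify them (the ipts pattern traced above). run=echo keeps
# this inspectable without root.
run=echo
tag='SPDK_NVMF'

ipts() {
    # Append the rule's own text as its comment.
    $run iptables "$@" -m comment --comment "$tag:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
# Cleanup could then list rules, grep for "$tag", and replay them with -D.
```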
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=442038 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 442038 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 442038 ']' 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:57.158 12:43:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:57.158 [2024-11-06 12:43:28.735930] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.158 [2024-11-06 12:43:28.736859] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:36:57.158 [2024-11-06 12:43:28.736893] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.417 [2024-11-06 12:43:28.822321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:57.417 [2024-11-06 12:43:28.872190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.417 [2024-11-06 12:43:28.872231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.417 [2024-11-06 12:43:28.872241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.417 [2024-11-06 12:43:28.872250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.417 [2024-11-06 12:43:28.872258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.417 [2024-11-06 12:43:28.873720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.417 [2024-11-06 12:43:28.873727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.417 [2024-11-06 12:43:28.948225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.417 [2024-11-06 12:43:28.948229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:57.417 [2024-11-06 12:43:28.948529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:58.352 5000+0 records in 00:36:58.352 5000+0 records out 00:36:58.352 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00683679 s, 1.5 GB/s 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.352 AIO0 00:36:58.352 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.353 12:43:29 
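`setup_bdev_aio` above backs the `AIO0` bdev with a plain file: `dd` writes 5000 blocks of 2048 zero bytes (10240000 bytes, ~9.8 MiB), and the file is then registered via the `bdev_aio_create` RPC. An equivalent standalone sketch (the temp path is illustrative):

```shell
#!/usr/bin/env bash
# Create a zero-filled backing file the way setup_bdev_aio does:
# 5000 blocks x 2048 bytes = 10240000 bytes.
aiofile=$(mktemp /tmp/aiofile.XXXXXX)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none

size=$(stat -c %s "$aiofile")
echo "$size"

# The file would then become a bdev with a 2048-byte block size:
#   rpc.py bdev_aio_create "$aiofile" AIO0 2048
rm -f "$aiofile"
```

Backing the namespace with an AIO file keeps the test independent of any real NVMe media while still exercising the target's full I/O path.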
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.353 [2024-11-06 12:43:29.742533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:58.353 [2024-11-06 12:43:29.770687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 442038 0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 442038 0 idle 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442038 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.25 reactor_0' 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442038 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.25 reactor_0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 442038 1 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 442038 1 idle 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256 00:36:58.353 12:43:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442042 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.00 reactor_1' 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442042 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.00 
reactor_1 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=442341 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 442038 0 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 442038 0 busy 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:36:58.611 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442038 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.48 reactor_0'
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442038 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.48 reactor_0
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 442038 1
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 442038 1 busy
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:58.869 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:36:58.870 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442042 root 20 0 128.2g 46592 34048 R 93.8 0.1 0:00.29 reactor_1'
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442042 root 20 0 128.2g 46592 34048 R 93.8 0.1 0:00.29 reactor_1
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:59.128 12:43:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 442341
00:37:09.094 Initializing NVMe Controllers
00:37:09.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:09.094 Controller IO queue size 256, less than required.
00:37:09.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:09.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:09.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:09.094 Initialization complete. Launching workers.
00:37:09.094 ========================================================
00:37:09.094 Latency(us)
00:37:09.094 Device Information : IOPS MiB/s Average min max
00:37:09.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17734.30 69.27 14441.87 3441.63 17685.92
00:37:09.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 11353.80 44.35 22566.13 3628.36 29068.59
00:37:09.094 ========================================================
00:37:09.094 Total : 29088.10 113.63 17612.97 3441.63 29068.59
00:37:09.094
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 442038 0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 442038 0 idle
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442038 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:20.24 reactor_0'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442038 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:20.24 reactor_0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 442038 1
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 442038 1 idle
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442042 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:09.99 reactor_1'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442042 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:09.99 reactor_1
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:09.094 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:09.095 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:09.095 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:09.095 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:09.095 12:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:09.095 12:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:37:09.660 12:43:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:37:09.660 12:43:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0
00:37:09.660 12:43:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:37:09.660 12:43:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
00:37:09.660 12:43:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 442038 0
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 442038 0 idle
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:37:11.562 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442038 root 20 0 128.2g 77056 34048 S 6.7 0.1 0:20.49 reactor_0'
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442038 root 20 0 128.2g 77056 34048 S 6.7 0.1 0:20.49 reactor_0
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 442038 1
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 442038 1 idle
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=442038
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:11.820 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 442038 -w 256
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 442042 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.06 reactor_1'
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 442042 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.06 reactor_1
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:37:12.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:12.078 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:12.078 rmmod nvme_tcp
00:37:12.078 rmmod nvme_fabrics
00:37:12.078 rmmod nvme_keyring
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 442038 ']'
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 442038
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 442038 ']'
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 442038
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 442038
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 442038'
00:37:12.336 killing process with pid 442038
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 442038
00:37:12.336 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 442038
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:12.594 12:43:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:14.494 12:43:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:14.494
00:37:14.494 real 0m23.048s
00:37:14.494 user 0m39.932s
00:37:14.494 sys 0m7.884s
00:37:14.494 12:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:14.494 12:43:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:14.494 ************************************
00:37:14.494 END TEST nvmf_interrupt
00:37:14.494 ************************************
00:37:14.494
00:37:14.494 real 28m34.507s
00:37:14.494 user 62m27.542s
00:37:14.494 sys 9m1.147s
00:37:14.494 12:43:46 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:14.494 12:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:14.494 ************************************
00:37:14.494 END TEST nvmf_tcp
00:37:14.494 ************************************
00:37:14.753 12:43:46 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]]
00:37:14.753 12:43:46 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:14.753 12:43:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:37:14.753 12:43:46 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:14.753 12:43:46 -- common/autotest_common.sh@10 -- # set +x
00:37:14.753 ************************************
00:37:14.753 START TEST spdkcli_nvmf_tcp
00:37:14.753 ************************************
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:14.753 * Looking for test storage...
00:37:14.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:37:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:14.753 --rc genhtml_branch_coverage=1
00:37:14.753 --rc genhtml_function_coverage=1
00:37:14.753 --rc genhtml_legend=1
00:37:14.753 --rc geninfo_all_blocks=1
00:37:14.753 --rc geninfo_unexecuted_blocks=1
00:37:14.753
00:37:14.753 '
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:37:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:14.753 --rc genhtml_branch_coverage=1
00:37:14.753 --rc genhtml_function_coverage=1
00:37:14.753 --rc genhtml_legend=1
00:37:14.753 --rc geninfo_all_blocks=1
00:37:14.753 --rc geninfo_unexecuted_blocks=1
00:37:14.753
00:37:14.753 '
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:37:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:14.753 --rc genhtml_branch_coverage=1
00:37:14.753 --rc genhtml_function_coverage=1
00:37:14.753 --rc genhtml_legend=1
00:37:14.753 --rc geninfo_all_blocks=1
00:37:14.753 --rc geninfo_unexecuted_blocks=1
00:37:14.753
00:37:14.753 '
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:37:14.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:14.753 --rc genhtml_branch_coverage=1
00:37:14.753 --rc genhtml_function_coverage=1
00:37:14.753 --rc genhtml_legend=1
00:37:14.753 --rc geninfo_all_blocks=1
00:37:14.753 --rc geninfo_unexecuted_blocks=1
00:37:14.753
00:37:14.753 '
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:14.753 12:43:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:14.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:14.754 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=445211
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 445211
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 445211 ']'
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:15.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:15.012 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:15.012 [2024-11-06 12:43:46.425628] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization...
00:37:15.012 [2024-11-06 12:43:46.425695] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445211 ]
00:37:15.012 [2024-11-06 12:43:46.519848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:15.012 [2024-11-06 12:43:46.570104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:15.012 [2024-11-06 12:43:46.570110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:15.269 12:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:37:15.269 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:37:15.269 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:37:15.269 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:37:15.269 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:37:15.269 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:37:15.269 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:37:15.269 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:37:15.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:37:15.270 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:37:15.270 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:15.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:15.270 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:15.270 ' 00:37:17.795 [2024-11-06 12:43:49.249537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.166 [2024-11-06 12:43:50.470147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:21.693 [2024-11-06 12:43:52.717586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:37:23.064 [2024-11-06 12:43:54.648096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:24.962 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:24.963 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:24.963 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:24.963 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:24.963 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:24.963 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:24.963 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.963 
12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:24.963 12:43:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:25.220 12:43:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:25.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:25.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:25.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:25.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:25.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:25.220 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:25.220 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:25.220 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:25.220 ' 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:30.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:30.481 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:30.481 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:30.481 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:30.481 12:44:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:30.481 12:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:30.481 12:44:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 445211 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 445211 ']' 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 445211 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 445211 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 445211' 00:37:30.481 killing process with pid 445211 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 445211 00:37:30.481 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 445211 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 445211 ']' 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 445211 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 445211 ']' 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 445211 00:37:30.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (445211) - No such process 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 445211 is not found' 00:37:30.740 Process with pid 445211 is not found 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:30.740 00:37:30.740 real 0m16.119s 00:37:30.740 user 0m33.638s 00:37:30.740 sys 0m0.757s 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:30.740 12:44:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.740 ************************************ 00:37:30.740 END TEST spdkcli_nvmf_tcp 00:37:30.740 ************************************ 00:37:30.740 12:44:02 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:30.740 12:44:02 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:30.740 12:44:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:30.740 12:44:02 -- common/autotest_common.sh@10 
-- # set +x 00:37:30.740 ************************************ 00:37:30.740 START TEST nvmf_identify_passthru 00:37:30.740 ************************************ 00:37:30.740 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:30.999 * Looking for test storage... 00:37:30.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:30.999 12:44:02 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.999 12:44:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.999 --rc genhtml_branch_coverage=1 00:37:30.999 --rc genhtml_function_coverage=1 00:37:30.999 --rc genhtml_legend=1 00:37:30.999 --rc geninfo_all_blocks=1 00:37:30.999 --rc geninfo_unexecuted_blocks=1 00:37:30.999 00:37:30.999 ' 00:37:30.999 
12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.999 --rc genhtml_branch_coverage=1 00:37:30.999 --rc genhtml_function_coverage=1 00:37:30.999 --rc genhtml_legend=1 00:37:30.999 --rc geninfo_all_blocks=1 00:37:30.999 --rc geninfo_unexecuted_blocks=1 00:37:30.999 00:37:30.999 ' 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.999 --rc genhtml_branch_coverage=1 00:37:30.999 --rc genhtml_function_coverage=1 00:37:30.999 --rc genhtml_legend=1 00:37:30.999 --rc geninfo_all_blocks=1 00:37:30.999 --rc geninfo_unexecuted_blocks=1 00:37:30.999 00:37:30.999 ' 00:37:30.999 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:30.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.999 --rc genhtml_branch_coverage=1 00:37:30.999 --rc genhtml_function_coverage=1 00:37:30.999 --rc genhtml_legend=1 00:37:30.999 --rc geninfo_all_blocks=1 00:37:30.999 --rc geninfo_unexecuted_blocks=1 00:37:30.999 00:37:31.000 ' 00:37:31.000 12:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:31.000 12:44:02 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:31.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.000 12:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:31.000 12:44:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.000 12:44:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.000 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:31.000 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:31.000 12:44:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.000 12:44:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.261 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.261 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:36.262 
12:44:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:36.262 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:36.262 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:36.262 Found net devices under 0000:af:00.0: cvl_0_0 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.262 12:44:07 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:36.262 Found net devices under 0000:af:00.1: cvl_0_1 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.262 
12:44:07 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.262 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.520 12:44:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:36.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:36.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:37:36.520 00:37:36.520 --- 10.0.0.2 ping statistics --- 00:37:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.520 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:36.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:36.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:37:36.520 00:37:36.520 --- 10.0.0.1 ping statistics --- 00:37:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.520 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:36.520 12:44:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:36.520 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.520 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:36.520 
12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:36.520 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:37:36.777 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:37:36.777 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:37:36.777 12:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:86:00.0 00:37:36.777 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:37:36.777 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:37:36.777 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:37:36.777 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:36.777 12:44:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:40.958 12:44:12 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:37:40.958 12:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:37:40.958 12:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:40.958 12:44:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:45.140 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:45.140 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:45.140 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:45.140 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.140 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.398 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=452603 00:37:45.398 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:45.398 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:45.398 12:44:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 452603 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 452603 ']' 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:45.398 12:44:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.398 [2024-11-06 12:44:16.820607] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:37:45.398 [2024-11-06 12:44:16.820673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.398 [2024-11-06 12:44:16.921929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:45.398 [2024-11-06 12:44:16.975036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.398 [2024-11-06 12:44:16.975080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.398 [2024-11-06 12:44:16.975091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.398 [2024-11-06 12:44:16.975100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.398 [2024-11-06 12:44:16.975108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:45.398 [2024-11-06 12:44:16.977160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.398 [2024-11-06 12:44:16.977178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:45.398 [2024-11-06 12:44:16.977277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.398 [2024-11-06 12:44:16.977289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:37:45.655 12:44:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.655 INFO: Log level set to 20 00:37:45.655 INFO: Requests: 00:37:45.655 { 00:37:45.655 "jsonrpc": "2.0", 00:37:45.655 "method": "nvmf_set_config", 00:37:45.655 "id": 1, 00:37:45.655 "params": { 00:37:45.655 "admin_cmd_passthru": { 00:37:45.655 "identify_ctrlr": true 00:37:45.655 } 00:37:45.655 } 00:37:45.655 } 00:37:45.655 00:37:45.655 INFO: response: 00:37:45.655 { 00:37:45.655 "jsonrpc": "2.0", 00:37:45.655 "id": 1, 00:37:45.655 "result": true 00:37:45.655 } 00:37:45.655 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.655 12:44:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.655 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.655 INFO: Setting log level to 20 00:37:45.655 INFO: Setting log level to 20 00:37:45.655 INFO: Log level set to 20 00:37:45.655 INFO: Log level set to 20 00:37:45.655 
INFO: Requests: 00:37:45.655 { 00:37:45.656 "jsonrpc": "2.0", 00:37:45.656 "method": "framework_start_init", 00:37:45.656 "id": 1 00:37:45.656 } 00:37:45.656 00:37:45.656 INFO: Requests: 00:37:45.656 { 00:37:45.656 "jsonrpc": "2.0", 00:37:45.656 "method": "framework_start_init", 00:37:45.656 "id": 1 00:37:45.656 } 00:37:45.656 00:37:45.656 [2024-11-06 12:44:17.127845] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:45.656 INFO: response: 00:37:45.656 { 00:37:45.656 "jsonrpc": "2.0", 00:37:45.656 "id": 1, 00:37:45.656 "result": true 00:37:45.656 } 00:37:45.656 00:37:45.656 INFO: response: 00:37:45.656 { 00:37:45.656 "jsonrpc": "2.0", 00:37:45.656 "id": 1, 00:37:45.656 "result": true 00:37:45.656 } 00:37:45.656 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.656 12:44:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.656 INFO: Setting log level to 40 00:37:45.656 INFO: Setting log level to 40 00:37:45.656 INFO: Setting log level to 40 00:37:45.656 [2024-11-06 12:44:17.137464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.656 12:44:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.656 12:44:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:37:45.656 12:44:17 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.656 12:44:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.933 Nvme0n1 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.933 [2024-11-06 12:44:20.068278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.933 12:44:20 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.933 [ 00:37:48.933 { 00:37:48.933 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:48.933 "subtype": "Discovery", 00:37:48.933 "listen_addresses": [], 00:37:48.933 "allow_any_host": true, 00:37:48.933 "hosts": [] 00:37:48.933 }, 00:37:48.933 { 00:37:48.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.933 "subtype": "NVMe", 00:37:48.933 "listen_addresses": [ 00:37:48.933 { 00:37:48.933 "trtype": "TCP", 00:37:48.933 "adrfam": "IPv4", 00:37:48.933 "traddr": "10.0.0.2", 00:37:48.933 "trsvcid": "4420" 00:37:48.933 } 00:37:48.933 ], 00:37:48.933 "allow_any_host": true, 00:37:48.933 "hosts": [], 00:37:48.933 "serial_number": "SPDK00000000000001", 00:37:48.933 "model_number": "SPDK bdev Controller", 00:37:48.933 "max_namespaces": 1, 00:37:48.933 "min_cntlid": 1, 00:37:48.933 "max_cntlid": 65519, 00:37:48.933 "namespaces": [ 00:37:48.933 { 00:37:48.933 "nsid": 1, 00:37:48.933 "bdev_name": "Nvme0n1", 00:37:48.933 "name": "Nvme0n1", 00:37:48.933 "nguid": "7A8A1B8EECC34C288CE06E67EE482F10", 00:37:48.933 "uuid": "7a8a1b8e-ecc3-4c28-8ce0-6e67ee482f10" 00:37:48.933 } 00:37:48.933 ] 00:37:48.933 } 00:37:48.933 ] 00:37:48.933 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:48.933 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:49.190 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.190 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:49.190 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:49.190 12:44:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.190 rmmod nvme_tcp 00:37:49.190 rmmod nvme_fabrics 00:37:49.190 rmmod nvme_keyring 00:37:49.190 12:44:20 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 452603 ']' 00:37:49.190 12:44:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 452603 00:37:49.190 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 452603 ']' 00:37:49.190 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 452603 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 452603 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 452603' 00:37:49.447 killing process with pid 452603 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 452603 00:37:49.447 12:44:20 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 452603 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.817 12:44:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.817 12:44:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:50.817 12:44:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.345 12:44:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:53.345 00:37:53.345 real 0m22.129s 00:37:53.345 user 0m28.329s 00:37:53.345 sys 0m6.013s 00:37:53.345 12:44:24 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:53.345 12:44:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:53.345 ************************************ 00:37:53.345 END TEST nvmf_identify_passthru 00:37:53.345 ************************************ 00:37:53.345 12:44:24 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:53.345 12:44:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:53.345 12:44:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:53.345 12:44:24 -- common/autotest_common.sh@10 -- # set +x 00:37:53.345 ************************************ 00:37:53.345 START TEST nvmf_dif 00:37:53.345 ************************************ 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:53.345 * Looking for test storage... 
00:37:53.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:53.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.345 --rc genhtml_branch_coverage=1 00:37:53.345 --rc genhtml_function_coverage=1 00:37:53.345 --rc genhtml_legend=1 00:37:53.345 --rc geninfo_all_blocks=1 00:37:53.345 --rc geninfo_unexecuted_blocks=1 00:37:53.345 00:37:53.345 ' 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:53.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.345 --rc genhtml_branch_coverage=1 00:37:53.345 --rc genhtml_function_coverage=1 00:37:53.345 --rc genhtml_legend=1 00:37:53.345 --rc geninfo_all_blocks=1 00:37:53.345 --rc geninfo_unexecuted_blocks=1 00:37:53.345 00:37:53.345 ' 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:37:53.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.345 --rc genhtml_branch_coverage=1 00:37:53.345 --rc genhtml_function_coverage=1 00:37:53.345 --rc genhtml_legend=1 00:37:53.345 --rc geninfo_all_blocks=1 00:37:53.345 --rc geninfo_unexecuted_blocks=1 00:37:53.345 00:37:53.345 ' 00:37:53.345 12:44:24 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:53.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.345 --rc genhtml_branch_coverage=1 00:37:53.345 --rc genhtml_function_coverage=1 00:37:53.345 --rc genhtml_legend=1 00:37:53.345 --rc geninfo_all_blocks=1 00:37:53.345 --rc geninfo_unexecuted_blocks=1 00:37:53.345 00:37:53.345 ' 00:37:53.345 12:44:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:37:53.345 12:44:24 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.345 12:44:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.345 12:44:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.345 12:44:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.346 12:44:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.346 12:44:24 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.346 12:44:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:53.346 12:44:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.346 12:44:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:53.346 12:44:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:37:53.346 12:44:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:53.346 12:44:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:53.346 12:44:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.346 12:44:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:53.346 12:44:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:53.346 12:44:24 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:53.346 12:44:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:58.606 12:44:29 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:58.606 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:58.606 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.606 12:44:29 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:58.606 Found net devices under 0000:af:00.0: cvl_0_0 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:58.606 Found net devices under 0000:af:00.1: cvl_0_1 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.606 
12:44:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.606 12:44:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.606 12:44:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:58.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:37:58.607 00:37:58.607 --- 10.0.0.2 ping statistics --- 00:37:58.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.607 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:37:58.607 00:37:58.607 --- 10.0.0.1 ping statistics --- 00:37:58.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.607 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:58.607 12:44:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:01.134 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:01.134 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:38:01.134 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:38:01.134 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:01.134 12:44:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:01.134 12:44:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=458427 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 458427 00:38:01.134 12:44:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 458427 ']' 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:01.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:01.134 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.134 [2024-11-06 12:44:32.678771] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:38:01.134 [2024-11-06 12:44:32.678827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.391 [2024-11-06 12:44:32.778718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.391 [2024-11-06 12:44:32.827212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.391 [2024-11-06 12:44:32.827251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.391 [2024-11-06 12:44:32.827262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.391 [2024-11-06 12:44:32.827271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.391 [2024-11-06 12:44:32.827278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:01.391 [2024-11-06 12:44:32.827966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:38:01.391 12:44:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.391 12:44:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.391 12:44:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:01.391 12:44:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.391 [2024-11-06 12:44:32.978585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.391 12:44:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:01.391 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.649 ************************************ 00:38:01.649 START TEST fio_dif_1_default 00:38:01.649 ************************************ 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.649 bdev_null0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.649 [2024-11-06 12:44:33.054935] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.649 { 00:38:01.649 "params": { 00:38:01.649 "name": "Nvme$subsystem", 00:38:01.649 "trtype": "$TEST_TRANSPORT", 00:38:01.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.649 "adrfam": "ipv4", 00:38:01.649 "trsvcid": "$NVMF_PORT", 00:38:01.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.649 "hdgst": ${hdgst:-false}, 00:38:01.649 "ddgst": ${ddgst:-false} 00:38:01.649 }, 00:38:01.649 "method": "bdev_nvme_attach_controller" 00:38:01.649 } 00:38:01.649 EOF 00:38:01.649 )") 00:38:01.649 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.650 "params": { 00:38:01.650 "name": "Nvme0", 00:38:01.650 "trtype": "tcp", 00:38:01.650 "traddr": "10.0.0.2", 00:38:01.650 "adrfam": "ipv4", 00:38:01.650 "trsvcid": "4420", 00:38:01.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:01.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:01.650 "hdgst": false, 00:38:01.650 "ddgst": false 00:38:01.650 }, 00:38:01.650 "method": "bdev_nvme_attach_controller" 00:38:01.650 }' 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:01.650 12:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.907 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:01.907 fio-3.35 
00:38:01.907 Starting 1 thread 00:38:14.093 00:38:14.093 filename0: (groupid=0, jobs=1): err= 0: pid=458851: Wed Nov 6 12:44:44 2024 00:38:14.093 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:38:14.093 slat (nsec): min=2517, max=16495, avg=5445.81, stdev=603.41 00:38:14.093 clat (usec): min=40832, max=46268, avg=41035.11, stdev=378.12 00:38:14.093 lat (usec): min=40837, max=46277, avg=41040.55, stdev=378.06 00:38:14.093 clat percentiles (usec): 00:38:14.093 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:14.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:14.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:14.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:38:14.093 | 99.99th=[46400] 00:38:14.093 bw ( KiB/s): min= 384, max= 416, per=99.55%, avg=388.80, stdev=11.72, samples=20 00:38:14.093 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:14.093 lat (msec) : 50=100.00% 00:38:14.093 cpu : usr=92.93%, sys=6.82%, ctx=12, majf=0, minf=0 00:38:14.093 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.093 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.093 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:14.093 00:38:14.093 Run status group 0 (all jobs): 00:38:14.093 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10017-10017msec 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.093 00:38:14.093 real 0m11.275s 00:38:14.093 user 0m21.314s 00:38:14.093 sys 0m0.999s 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.093 ************************************ 00:38:14.093 END TEST fio_dif_1_default 00:38:14.093 ************************************ 00:38:14.093 12:44:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:14.093 12:44:44 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:14.093 12:44:44 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:14.093 12:44:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.093 ************************************ 00:38:14.093 START TEST fio_dif_1_multi_subsystems 00:38:14.093 ************************************ 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:14.093 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 bdev_null0 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 [2024-11-06 12:44:44.385883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 bdev_null1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 12:44:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.094 { 00:38:14.094 "params": { 00:38:14.094 "name": "Nvme$subsystem", 00:38:14.094 "trtype": "$TEST_TRANSPORT", 00:38:14.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.094 "adrfam": "ipv4", 00:38:14.094 "trsvcid": "$NVMF_PORT", 00:38:14.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.094 "hdgst": ${hdgst:-false}, 00:38:14.094 "ddgst": ${ddgst:-false} 00:38:14.094 }, 00:38:14.094 "method": "bdev_nvme_attach_controller" 00:38:14.094 } 00:38:14.094 EOF 00:38:14.094 )") 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.094 12:44:44 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.094 { 00:38:14.094 "params": { 00:38:14.094 "name": "Nvme$subsystem", 00:38:14.094 "trtype": "$TEST_TRANSPORT", 00:38:14.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.094 "adrfam": "ipv4", 00:38:14.094 "trsvcid": "$NVMF_PORT", 00:38:14.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.094 "hdgst": ${hdgst:-false}, 00:38:14.094 "ddgst": ${ddgst:-false} 00:38:14.094 }, 00:38:14.094 "method": "bdev_nvme_attach_controller" 00:38:14.094 } 00:38:14.094 EOF 00:38:14.094 )") 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.094 "params": { 00:38:14.094 "name": "Nvme0", 00:38:14.094 "trtype": "tcp", 00:38:14.094 "traddr": "10.0.0.2", 00:38:14.094 "adrfam": "ipv4", 00:38:14.094 "trsvcid": "4420", 00:38:14.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.094 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.094 "hdgst": false, 00:38:14.094 "ddgst": false 00:38:14.094 }, 00:38:14.094 "method": "bdev_nvme_attach_controller" 00:38:14.094 },{ 00:38:14.094 "params": { 00:38:14.094 "name": "Nvme1", 00:38:14.094 "trtype": "tcp", 00:38:14.094 "traddr": "10.0.0.2", 00:38:14.094 "adrfam": "ipv4", 00:38:14.094 "trsvcid": "4420", 00:38:14.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:14.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:14.094 "hdgst": false, 00:38:14.094 "ddgst": false 00:38:14.094 }, 00:38:14.094 "method": "bdev_nvme_attach_controller" 00:38:14.094 }' 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:14.094 12:44:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:14.094 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:14.094 fio-3.35 00:38:14.095 Starting 2 threads 00:38:24.063 00:38:24.063 filename0: (groupid=0, jobs=1): err= 0: pid=460847: Wed Nov 6 12:44:55 2024 00:38:24.063 read: IOPS=188, BW=755KiB/s (774kB/s)(7584KiB/10039msec) 00:38:24.063 slat (nsec): min=9139, max=32808, avg=10226.70, stdev=2088.99 00:38:24.063 clat (usec): min=586, max=42422, avg=21148.96, stdev=20455.07 00:38:24.063 lat (usec): min=596, max=42431, avg=21159.19, stdev=20454.42 00:38:24.063 clat percentiles (usec): 00:38:24.063 | 1.00th=[ 594], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 611], 00:38:24.063 | 30.00th=[ 619], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:38:24.063 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:24.063 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:24.063 | 99.99th=[42206] 00:38:24.063 bw ( KiB/s): min= 672, max= 768, per=50.25%, avg=756.80, stdev=28.00, samples=20 00:38:24.063 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:38:24.063 lat (usec) : 750=49.58%, 1000=0.21% 00:38:24.063 lat (msec) : 50=50.21% 00:38:24.063 cpu : usr=95.83%, sys=3.86%, ctx=13, majf=0, minf=16 00:38:24.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:24.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.063 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:24.063 filename1: (groupid=0, jobs=1): err= 0: pid=460848: Wed Nov 6 12:44:55 2024 00:38:24.063 read: IOPS=187, BW=750KiB/s (768kB/s)(7520KiB/10032msec) 00:38:24.063 slat (nsec): min=9167, max=32582, avg=10308.17, stdev=2183.31 00:38:24.063 clat (usec): min=566, max=43347, avg=21314.30, stdev=20547.08 00:38:24.063 lat (usec): min=575, max=43380, avg=21324.60, stdev=20546.46 00:38:24.063 clat percentiles (usec): 00:38:24.063 | 1.00th=[ 578], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 594], 00:38:24.063 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[41157], 60.00th=[41157], 00:38:24.063 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:24.063 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:38:24.063 | 99.99th=[43254] 00:38:24.063 bw ( KiB/s): min= 672, max= 768, per=49.85%, avg=750.40, stdev=30.22, samples=20 00:38:24.063 iops : min= 168, max= 192, avg=187.60, stdev= 7.56, samples=20 00:38:24.063 lat (usec) : 750=49.57% 00:38:24.063 lat (msec) : 50=50.43% 00:38:24.063 cpu : usr=96.08%, sys=3.61%, ctx=13, majf=0, minf=31 00:38:24.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.063 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:24.063 00:38:24.063 Run status group 0 (all jobs): 00:38:24.063 READ: bw=1505KiB/s (1541kB/s), 750KiB/s-755KiB/s (768kB/s-774kB/s), io=14.8MiB (15.5MB), run=10032-10039msec 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 12:44:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.322 00:38:24.322 real 0m11.445s 00:38:24.322 user 0m30.914s 00:38:24.322 sys 0m1.095s 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 ************************************ 00:38:24.322 END TEST fio_dif_1_multi_subsystems 00:38:24.322 ************************************ 00:38:24.322 12:44:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:24.322 12:44:55 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:24.322 12:44:55 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:24.322 12:44:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 ************************************ 00:38:24.322 START TEST fio_dif_rand_params 00:38:24.322 ************************************ 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:24.323 12:44:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.323 bdev_null0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.323 [2024-11-06 12:44:55.902182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.323 { 
00:38:24.323 "params": { 00:38:24.323 "name": "Nvme$subsystem", 00:38:24.323 "trtype": "$TEST_TRANSPORT", 00:38:24.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.323 "adrfam": "ipv4", 00:38:24.323 "trsvcid": "$NVMF_PORT", 00:38:24.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.323 "hdgst": ${hdgst:-false}, 00:38:24.323 "ddgst": ${ddgst:-false} 00:38:24.323 }, 00:38:24.323 "method": "bdev_nvme_attach_controller" 00:38:24.323 } 00:38:24.323 EOF 00:38:24.323 )") 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:38:24.323 
12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:24.323 12:44:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.323 "params": { 00:38:24.323 "name": "Nvme0", 00:38:24.323 "trtype": "tcp", 00:38:24.323 "traddr": "10.0.0.2", 00:38:24.323 "adrfam": "ipv4", 00:38:24.323 "trsvcid": "4420", 00:38:24.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.323 "hdgst": false, 00:38:24.323 "ddgst": false 00:38:24.323 }, 00:38:24.323 "method": "bdev_nvme_attach_controller" 00:38:24.323 }' 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.597 12:44:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:24.857 ... 00:38:24.857 fio-3.35 00:38:24.857 Starting 3 threads 00:38:31.542 00:38:31.542 filename0: (groupid=0, jobs=1): err= 0: pid=463078: Wed Nov 6 12:45:02 2024 00:38:31.542 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(135MiB/5010msec) 00:38:31.542 slat (nsec): min=2629, max=96787, avg=10370.70, stdev=3396.08 00:38:31.542 clat (usec): min=6864, max=55538, avg=13910.36, stdev=6943.87 00:38:31.542 lat (usec): min=6870, max=55549, avg=13920.73, stdev=6943.79 00:38:31.542 clat percentiles (usec): 00:38:31.542 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11338], 00:38:31.542 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:38:31.542 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15401], 95.00th=[16581], 00:38:31.542 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[55313], 00:38:31.542 | 99.99th=[55313] 00:38:31.542 bw ( KiB/s): min=21504, max=30464, per=33.83%, avg=27571.20, stdev=2861.02, samples=10 00:38:31.542 iops : min= 168, max= 238, avg=215.40, stdev=22.35, samples=10 00:38:31.542 lat (msec) : 10=8.99%, 20=87.67%, 50=1.02%, 100=2.32% 00:38:31.542 cpu : usr=95.23%, sys=4.45%, ctx=8, majf=0, minf=57 00:38:31.542 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 issued rwts: total=1079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.542 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.542 filename0: (groupid=0, jobs=1): err= 0: pid=463079: Wed Nov 6 12:45:02 2024 00:38:31.542 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5045msec) 00:38:31.542 slat (nsec): min=9218, max=59658, avg=14364.27, 
stdev=2654.58 00:38:31.542 clat (usec): min=6650, max=57997, avg=14129.43, stdev=6734.93 00:38:31.542 lat (usec): min=6660, max=58012, avg=14143.79, stdev=6734.82 00:38:31.542 clat percentiles (usec): 00:38:31.542 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11207], 00:38:31.542 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:38:31.542 | 70.00th=[14484], 80.00th=[15270], 90.00th=[16581], 95.00th=[17695], 00:38:31.542 | 99.00th=[52691], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:38:31.542 | 99.99th=[57934] 00:38:31.542 bw ( KiB/s): min=19200, max=31488, per=33.46%, avg=27264.00, stdev=3258.91, samples=10 00:38:31.542 iops : min= 150, max= 246, avg=213.00, stdev=25.46, samples=10 00:38:31.542 lat (msec) : 10=11.06%, 20=86.13%, 50=0.75%, 100=2.06% 00:38:31.542 cpu : usr=94.19%, sys=5.49%, ctx=9, majf=0, minf=68 00:38:31.542 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 issued rwts: total=1067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.542 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.542 filename0: (groupid=0, jobs=1): err= 0: pid=463080: Wed Nov 6 12:45:02 2024 00:38:31.542 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(133MiB/5006msec) 00:38:31.542 slat (nsec): min=9270, max=29071, avg=14755.14, stdev=1950.34 00:38:31.542 clat (usec): min=4489, max=56682, avg=14070.34, stdev=4354.81 00:38:31.542 lat (usec): min=4499, max=56697, avg=14085.09, stdev=4355.19 00:38:31.542 clat percentiles (usec): 00:38:31.542 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[11076], 00:38:31.542 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14353], 60.00th=[15008], 00:38:31.542 | 70.00th=[15664], 80.00th=[16450], 90.00th=[17433], 95.00th=[17957], 00:38:31.542 | 99.00th=[19792], 99.50th=[54789], 
99.90th=[56361], 99.95th=[56886], 00:38:31.542 | 99.99th=[56886] 00:38:31.542 bw ( KiB/s): min=23808, max=35328, per=33.42%, avg=27238.40, stdev=3346.98, samples=10 00:38:31.542 iops : min= 186, max= 276, avg=212.80, stdev=26.15, samples=10 00:38:31.542 lat (msec) : 10=13.88%, 20=85.27%, 50=0.28%, 100=0.56% 00:38:31.542 cpu : usr=94.55%, sys=5.11%, ctx=10, majf=0, minf=45 00:38:31.542 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.542 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.542 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.542 00:38:31.542 Run status group 0 (all jobs): 00:38:31.542 READ: bw=79.6MiB/s (83.4MB/s), 26.4MiB/s-26.9MiB/s (27.7MB/s-28.2MB/s), io=402MiB (421MB), run=5006-5045msec 00:38:31.542 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:31.542 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:31.543 12:45:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 bdev_null0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 [2024-11-06 12:45:02.380509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 bdev_null1 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:31.543 bdev_null2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.543 { 00:38:31.543 "params": { 00:38:31.543 "name": "Nvme$subsystem", 00:38:31.543 "trtype": "$TEST_TRANSPORT", 00:38:31.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.543 "adrfam": "ipv4", 00:38:31.543 "trsvcid": "$NVMF_PORT", 00:38:31.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.543 "hdgst": ${hdgst:-false}, 00:38:31.543 "ddgst": ${ddgst:-false} 00:38:31.543 }, 00:38:31.543 "method": "bdev_nvme_attach_controller" 00:38:31.543 } 00:38:31.543 EOF 00:38:31.543 )") 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.543 12:45:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.543 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.544 { 00:38:31.544 "params": { 00:38:31.544 "name": "Nvme$subsystem", 00:38:31.544 "trtype": "$TEST_TRANSPORT", 00:38:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.544 "adrfam": "ipv4", 00:38:31.544 "trsvcid": "$NVMF_PORT", 00:38:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.544 "hdgst": ${hdgst:-false}, 00:38:31.544 "ddgst": ${ddgst:-false} 00:38:31.544 }, 00:38:31.544 "method": "bdev_nvme_attach_controller" 00:38:31.544 } 00:38:31.544 EOF 00:38:31.544 )") 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.544 12:45:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.544 { 00:38:31.544 "params": { 00:38:31.544 "name": "Nvme$subsystem", 00:38:31.544 "trtype": "$TEST_TRANSPORT", 00:38:31.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.544 "adrfam": "ipv4", 00:38:31.544 "trsvcid": "$NVMF_PORT", 00:38:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.544 "hdgst": ${hdgst:-false}, 00:38:31.544 "ddgst": ${ddgst:-false} 00:38:31.544 }, 00:38:31.544 "method": "bdev_nvme_attach_controller" 00:38:31.544 } 00:38:31.544 EOF 00:38:31.544 )") 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
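The `config+=("$(cat <<-EOF ...)")` entries traced above show the pattern used to build the fio JSON config: one `bdev_nvme_attach_controller` stanza per subsystem id is appended to a `config` array, the array is joined with `IFS=,`, and the joined text is pretty-printed through `jq .` before being handed to fio as `--spdk_json_conf`. A minimal standalone sketch of that join (the function body, defaults, and the omission of the `jq` step are illustrative assumptions, not the upstream `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the config-join pattern from the trace above;
# gen_target_json and the fallback values are assumptions, not the real
# nvmf/common.sh implementation.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-0}"; do
        # One bdev_nvme_attach_controller stanza per subsystem id.
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" \
            "${NVMF_PORT:-4420}" "$subsystem" "$subsystem")")
    done
    local IFS=,                   # join the stanzas with commas...
    printf '%s\n' "${config[*]}"  # ...the traced script then pipes this to `jq .`
}

gen_target_json 0 1 2
```

Calling it with `0 1 2` reproduces the three-controller blob printed at `nvmf/common.sh@586` above (Nvme0, Nvme1, Nvme2 against cnode0..2 on 10.0.0.2:4420).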
00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:31.544 "params": { 00:38:31.544 "name": "Nvme0", 00:38:31.544 "trtype": "tcp", 00:38:31.544 "traddr": "10.0.0.2", 00:38:31.544 "adrfam": "ipv4", 00:38:31.544 "trsvcid": "4420", 00:38:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:31.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:31.544 "hdgst": false, 00:38:31.544 "ddgst": false 00:38:31.544 }, 00:38:31.544 "method": "bdev_nvme_attach_controller" 00:38:31.544 },{ 00:38:31.544 "params": { 00:38:31.544 "name": "Nvme1", 00:38:31.544 "trtype": "tcp", 00:38:31.544 "traddr": "10.0.0.2", 00:38:31.544 "adrfam": "ipv4", 00:38:31.544 "trsvcid": "4420", 00:38:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:31.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:31.544 "hdgst": false, 00:38:31.544 "ddgst": false 00:38:31.544 }, 00:38:31.544 "method": "bdev_nvme_attach_controller" 00:38:31.544 },{ 00:38:31.544 "params": { 00:38:31.544 "name": "Nvme2", 00:38:31.544 "trtype": "tcp", 00:38:31.544 "traddr": "10.0.0.2", 00:38:31.544 "adrfam": "ipv4", 00:38:31.544 "trsvcid": "4420", 00:38:31.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:31.544 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:31.544 "hdgst": false, 00:38:31.544 "ddgst": false 00:38:31.544 }, 00:38:31.544 "method": "bdev_nvme_attach_controller" 00:38:31.544 }' 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 
-- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:31.544 12:45:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.544 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:31.544 ... 00:38:31.544 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:31.544 ... 00:38:31.544 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:31.544 ... 
00:38:31.544 fio-3.35 00:38:31.544 Starting 24 threads 00:38:43.730 00:38:43.730 filename0: (groupid=0, jobs=1): err= 0: pid=464410: Wed Nov 6 12:45:13 2024 00:38:43.730 read: IOPS=420, BW=1681KiB/s (1721kB/s)(16.4MiB/10016msec) 00:38:43.730 slat (nsec): min=7924, max=64061, avg=23194.13, stdev=11557.08 00:38:43.730 clat (usec): min=24369, max=43922, avg=37905.06, stdev=991.68 00:38:43.730 lat (usec): min=24386, max=43949, avg=37928.25, stdev=990.21 00:38:43.730 clat percentiles (usec): 00:38:43.730 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.730 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.730 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:38:43.730 | 99.00th=[39060], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:38:43.730 | 99.99th=[43779] 00:38:43.730 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1677.63, stdev=38.96, samples=19 00:38:43.730 iops : min= 416, max= 448, avg=419.37, stdev= 9.75, samples=19 00:38:43.730 lat (msec) : 50=100.00% 00:38:43.730 cpu : usr=98.06%, sys=1.55%, ctx=13, majf=0, minf=46 00:38:43.730 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.730 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.730 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.730 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.730 filename0: (groupid=0, jobs=1): err= 0: pid=464411: Wed Nov 6 12:45:13 2024 00:38:43.730 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10004msec) 00:38:43.730 slat (usec): min=12, max=101, avg=42.08, stdev=17.70 00:38:43.730 clat (usec): min=12352, max=39910, avg=37496.51, stdev=2122.25 00:38:43.730 lat (usec): min=12370, max=39932, avg=37538.59, stdev=2124.07 00:38:43.730 clat percentiles (usec): 00:38:43.730 | 1.00th=[27132], 5.00th=[37487], 
10.00th=[37487], 20.00th=[37487], 00:38:43.730 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.730 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.730 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.730 | 99.99th=[40109] 00:38:43.730 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1684.21, stdev=47.95, samples=19 00:38:43.730 iops : min= 416, max= 448, avg=421.05, stdev=11.99, samples=19 00:38:43.731 lat (msec) : 20=0.76%, 50=99.24% 00:38:43.731 cpu : usr=98.46%, sys=1.14%, ctx=9, majf=0, minf=41 00:38:43.731 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464412: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.5MiB/10022msec) 00:38:43.731 slat (usec): min=7, max=107, avg=40.73, stdev=18.34 00:38:43.731 clat (usec): min=18991, max=39899, avg=37564.67, stdev=1534.65 00:38:43.731 lat (usec): min=19006, max=39963, avg=37605.40, stdev=1537.20 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[27132], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.731 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.731 | 99.99th=[40109] 00:38:43.731 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1684.21, stdev=47.95, samples=19 00:38:43.731 iops : min= 416, max= 448, avg=421.05, stdev=11.99, samples=19 00:38:43.731 lat (msec) : 20=0.38%, 50=99.62% 
00:38:43.731 cpu : usr=98.52%, sys=1.08%, ctx=11, majf=0, minf=34 00:38:43.731 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464413: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10022msec) 00:38:43.731 slat (usec): min=8, max=102, avg=38.15, stdev=17.78 00:38:43.731 clat (usec): min=21617, max=47127, avg=37530.93, stdev=1795.43 00:38:43.731 lat (usec): min=21628, max=47137, avg=37569.08, stdev=1798.91 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[27395], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.731 | 99.00th=[39584], 99.50th=[39584], 99.90th=[45351], 99.95th=[46924], 00:38:43.731 | 99.99th=[46924] 00:38:43.731 bw ( KiB/s): min= 1664, max= 1840, per=4.17%, avg=1686.74, stdev=54.73, samples=19 00:38:43.731 iops : min= 416, max= 460, avg=421.68, stdev=13.68, samples=19 00:38:43.731 lat (msec) : 50=100.00% 00:38:43.731 cpu : usr=98.45%, sys=1.15%, ctx=14, majf=0, minf=32 00:38:43.731 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464414: Wed Nov 6 
12:45:13 2024 00:38:43.731 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10004msec) 00:38:43.731 slat (nsec): min=6801, max=70551, avg=26337.55, stdev=12317.99 00:38:43.731 clat (usec): min=10030, max=39884, avg=37681.29, stdev=2137.65 00:38:43.731 lat (usec): min=10060, max=39912, avg=37707.62, stdev=2137.37 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[27132], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.731 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[40109], 00:38:43.731 | 99.99th=[40109] 00:38:43.731 bw ( KiB/s): min= 1664, max= 1792, per=4.17%, avg=1684.21, stdev=47.95, samples=19 00:38:43.731 iops : min= 416, max= 448, avg=421.05, stdev=11.99, samples=19 00:38:43.731 lat (msec) : 20=0.76%, 50=99.24% 00:38:43.731 cpu : usr=98.67%, sys=1.00%, ctx=21, majf=0, minf=78 00:38:43.731 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464415: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=418, BW=1675KiB/s (1716kB/s)(16.4MiB/10008msec) 00:38:43.731 slat (nsec): min=5103, max=44322, avg=18917.34, stdev=5159.02 00:38:43.731 clat (usec): min=21814, max=81904, avg=38026.33, stdev=3090.96 00:38:43.731 lat (usec): min=21825, max=81920, avg=38045.25, stdev=3090.44 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[22414], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:38:43.731 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 
00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:38:43.731 | 99.00th=[53740], 99.50th=[53740], 99.90th=[62129], 99.95th=[62129], 00:38:43.731 | 99.99th=[82314] 00:38:43.731 bw ( KiB/s): min= 1539, max= 1792, per=4.13%, avg=1670.89, stdev=49.39, samples=19 00:38:43.731 iops : min= 384, max= 448, avg=417.68, stdev=12.46, samples=19 00:38:43.731 lat (msec) : 50=98.33%, 100=1.67% 00:38:43.731 cpu : usr=98.45%, sys=1.14%, ctx=13, majf=0, minf=36 00:38:43.731 IO depths : 1=5.5%, 2=11.6%, 4=24.4%, 8=51.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464416: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=420, BW=1680KiB/s (1720kB/s)(16.4MiB/10019msec) 00:38:43.731 slat (nsec): min=8271, max=69921, avg=32917.22, stdev=10931.67 00:38:43.731 clat (usec): min=22823, max=52590, avg=37832.64, stdev=1215.93 00:38:43.731 lat (usec): min=22869, max=52604, avg=37865.55, stdev=1214.79 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.731 | 99.00th=[39060], 99.50th=[39584], 99.90th=[48497], 99.95th=[48497], 00:38:43.731 | 99.99th=[52691] 00:38:43.731 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.731 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.731 lat (msec) : 50=99.95%, 100=0.05% 00:38:43.731 cpu : usr=98.30%, sys=1.31%, ctx=12, majf=0, minf=29 00:38:43.731 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename0: (groupid=0, jobs=1): err= 0: pid=464417: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=420, BW=1682KiB/s (1722kB/s)(16.4MiB/10010msec) 00:38:43.731 slat (usec): min=9, max=101, avg=38.66, stdev=17.79 00:38:43.731 clat (usec): min=18421, max=42987, avg=37673.22, stdev=897.78 00:38:43.731 lat (usec): min=18434, max=43008, avg=37711.88, stdev=900.82 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[34341], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.731 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.731 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[40109], 00:38:43.731 | 99.99th=[42730] 00:38:43.731 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1677.47, stdev=40.36, samples=19 00:38:43.731 iops : min= 416, max= 448, avg=419.37, stdev=10.09, samples=19 00:38:43.731 lat (msec) : 20=0.05%, 50=99.95% 00:38:43.731 cpu : usr=98.37%, sys=1.22%, ctx=20, majf=0, minf=32 00:38:43.731 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename1: (groupid=0, jobs=1): err= 0: pid=464418: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=427, BW=1712KiB/s (1753kB/s)(16.7MiB/10006msec) 00:38:43.731 
slat (usec): min=5, max=108, avg=30.52, stdev=18.94 00:38:43.731 clat (msec): min=18, max=103, avg=37.13, stdev= 4.85 00:38:43.731 lat (msec): min=18, max=103, avg=37.16, stdev= 4.85 00:38:43.731 clat percentiles (msec): 00:38:43.731 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 38], 00:38:43.731 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 39], 00:38:43.731 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 41], 00:38:43.731 | 99.00th=[ 50], 99.50th=[ 63], 99.90th=[ 78], 99.95th=[ 78], 00:38:43.731 | 99.99th=[ 104] 00:38:43.731 bw ( KiB/s): min= 1504, max= 1824, per=4.23%, avg=1708.63, stdev=75.72, samples=19 00:38:43.731 iops : min= 376, max= 456, avg=427.16, stdev=18.93, samples=19 00:38:43.731 lat (msec) : 20=0.47%, 50=98.88%, 100=0.61%, 250=0.05% 00:38:43.731 cpu : usr=98.34%, sys=1.27%, ctx=9, majf=0, minf=40 00:38:43.731 IO depths : 1=3.7%, 2=7.4%, 4=16.0%, 8=62.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:38:43.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 complete : 0=0.0%, 4=91.9%, 8=3.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.731 issued rwts: total=4282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.731 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.731 filename1: (groupid=0, jobs=1): err= 0: pid=464419: Wed Nov 6 12:45:13 2024 00:38:43.731 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10006msec) 00:38:43.731 slat (nsec): min=8474, max=67319, avg=33395.49, stdev=10995.85 00:38:43.731 clat (usec): min=22727, max=72876, avg=37867.95, stdev=2381.53 00:38:43.731 lat (usec): min=22742, max=72890, avg=37901.35, stdev=2380.77 00:38:43.731 clat percentiles (usec): 00:38:43.731 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.731 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[72877], 
99.95th=[72877], 00:38:43.732 | 99.99th=[72877] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.732 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.732 cpu : usr=98.48%, sys=1.12%, ctx=13, majf=0, minf=31 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464420: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=419, BW=1680KiB/s (1720kB/s)(16.4MiB/10020msec) 00:38:43.732 slat (nsec): min=5966, max=77104, avg=34127.13, stdev=10652.37 00:38:43.732 clat (usec): min=22685, max=49417, avg=37815.80, stdev=1252.41 00:38:43.732 lat (usec): min=22720, max=49434, avg=37849.93, stdev=1251.39 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[49546], 99.95th=[49546], 00:38:43.732 | 99.99th=[49546] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.732 lat (msec) : 50=100.00% 00:38:43.732 cpu : usr=98.42%, sys=1.17%, ctx=14, majf=0, minf=28 00:38:43.732 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464421: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10006msec) 00:38:43.732 slat (nsec): min=8821, max=64604, avg=34171.51, stdev=10096.56 00:38:43.732 clat (usec): min=22690, max=77373, avg=37876.15, stdev=2415.47 00:38:43.732 lat (usec): min=22724, max=77389, avg=37910.32, stdev=2414.55 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[72877], 99.95th=[72877], 00:38:43.732 | 99.99th=[77071] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=49.84, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=12.46, samples=19 00:38:43.732 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.732 cpu : usr=98.39%, sys=1.20%, ctx=14, majf=0, minf=28 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464422: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=422, BW=1692KiB/s (1732kB/s)(16.6MiB/10026msec) 00:38:43.732 slat (usec): min=7, max=104, avg=39.81, stdev=17.57 00:38:43.732 clat (usec): min=12399, max=39873, avg=37453.22, stdev=2411.27 00:38:43.732 
lat (usec): min=12416, max=39918, avg=37493.02, stdev=2413.71 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[19268], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.732 | 99.99th=[40109] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1923, per=4.18%, avg=1689.75, stdev=79.26, samples=20 00:38:43.732 iops : min= 384, max= 480, avg=422.40, stdev=19.70, samples=20 00:38:43.732 lat (msec) : 20=1.13%, 50=98.87% 00:38:43.732 cpu : usr=98.49%, sys=1.10%, ctx=10, majf=0, minf=46 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464423: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10006msec) 00:38:43.732 slat (nsec): min=6433, max=65397, avg=34375.16, stdev=10043.99 00:38:43.732 clat (usec): min=22818, max=72722, avg=37890.66, stdev=2369.80 00:38:43.732 lat (usec): min=22837, max=72741, avg=37925.04, stdev=2368.60 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[72877], 99.95th=[72877], 00:38:43.732 | 99.99th=[72877] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, 
avg=1670.74, stdev=51.80, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.732 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.732 cpu : usr=98.36%, sys=1.23%, ctx=11, majf=0, minf=41 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464424: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10006msec) 00:38:43.732 slat (nsec): min=5153, max=66243, avg=32742.77, stdev=10754.44 00:38:43.732 clat (usec): min=22773, max=72468, avg=37870.99, stdev=2357.84 00:38:43.732 lat (usec): min=22800, max=72486, avg=37903.73, stdev=2357.11 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[72877], 99.95th=[72877], 00:38:43.732 | 99.99th=[72877] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.732 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.732 cpu : usr=98.19%, sys=1.40%, ctx=13, majf=0, minf=37 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: 
total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename1: (groupid=0, jobs=1): err= 0: pid=464425: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=420, BW=1682KiB/s (1722kB/s)(16.4MiB/10010msec) 00:38:43.732 slat (usec): min=9, max=103, avg=39.46, stdev=18.19 00:38:43.732 clat (usec): min=26741, max=39852, avg=37666.42, stdev=828.09 00:38:43.732 lat (usec): min=26751, max=39929, avg=37705.88, stdev=831.47 00:38:43.732 clat percentiles (usec): 00:38:43.732 | 1.00th=[34341], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.732 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.732 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.732 | 99.99th=[40109] 00:38:43.732 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1677.47, stdev=40.36, samples=19 00:38:43.732 iops : min= 416, max= 448, avg=419.37, stdev=10.09, samples=19 00:38:43.732 lat (msec) : 50=100.00% 00:38:43.732 cpu : usr=98.29%, sys=1.30%, ctx=12, majf=0, minf=29 00:38:43.732 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename2: (groupid=0, jobs=1): err= 0: pid=464426: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10009msec) 00:38:43.732 slat (nsec): min=5193, max=42652, avg=18788.46, stdev=4955.53 00:38:43.732 clat (usec): min=22067, max=62927, avg=38040.64, stdev=3244.18 00:38:43.732 lat (usec): min=22084, max=62942, avg=38059.43, stdev=3243.97 00:38:43.732 clat percentiles (usec): 
00:38:43.732 | 1.00th=[22414], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:38:43.732 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.732 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:38:43.732 | 99.00th=[53740], 99.50th=[56361], 99.90th=[62653], 99.95th=[62653], 00:38:43.732 | 99.99th=[63177] 00:38:43.732 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=52.07, samples=19 00:38:43.732 iops : min= 384, max= 448, avg=417.68, stdev=13.02, samples=19 00:38:43.732 lat (msec) : 50=98.00%, 100=2.00% 00:38:43.732 cpu : usr=98.45%, sys=1.14%, ctx=13, majf=0, minf=43 00:38:43.732 IO depths : 1=5.5%, 2=11.6%, 4=24.4%, 8=51.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:43.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.732 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.732 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.732 filename2: (groupid=0, jobs=1): err= 0: pid=464427: Wed Nov 6 12:45:13 2024 00:38:43.732 read: IOPS=425, BW=1703KiB/s (1744kB/s)(16.7MiB/10026msec) 00:38:43.732 slat (usec): min=7, max=100, avg=18.80, stdev= 7.45 00:38:43.732 clat (usec): min=20960, max=62549, avg=37428.10, stdev=3759.92 00:38:43.733 lat (usec): min=20970, max=62593, avg=37446.90, stdev=3761.49 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[21890], 5.00th=[28443], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[53216], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:38:43.733 | 99.99th=[62653] 00:38:43.733 bw ( KiB/s): min= 1648, max= 1888, per=4.21%, avg=1702.74, stdev=69.81, samples=19 00:38:43.733 iops : min= 412, max= 472, avg=425.68, stdev=17.45, samples=19 
00:38:43.733 lat (msec) : 50=98.55%, 100=1.45% 00:38:43.733 cpu : usr=98.29%, sys=1.31%, ctx=13, majf=0, minf=46 00:38:43.733 IO depths : 1=5.4%, 2=11.3%, 4=23.7%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 filename2: (groupid=0, jobs=1): err= 0: pid=464428: Wed Nov 6 12:45:13 2024 00:38:43.733 read: IOPS=420, BW=1681KiB/s (1721kB/s)(16.4MiB/10016msec) 00:38:43.733 slat (nsec): min=9671, max=60106, avg=17935.81, stdev=7414.16 00:38:43.733 clat (usec): min=24341, max=43960, avg=37937.17, stdev=982.71 00:38:43.733 lat (usec): min=24365, max=43979, avg=37955.11, stdev=982.01 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:38:43.733 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:38:43.733 | 99.99th=[43779] 00:38:43.733 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1677.63, stdev=40.31, samples=19 00:38:43.733 iops : min= 416, max= 448, avg=419.37, stdev=10.09, samples=19 00:38:43.733 lat (msec) : 50=100.00% 00:38:43.733 cpu : usr=98.36%, sys=1.25%, ctx=8, majf=0, minf=43 00:38:43.733 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 
filename2: (groupid=0, jobs=1): err= 0: pid=464429: Wed Nov 6 12:45:13 2024 00:38:43.733 read: IOPS=418, BW=1676KiB/s (1716kB/s)(16.4MiB/10007msec) 00:38:43.733 slat (nsec): min=4964, max=66989, avg=34767.93, stdev=10352.86 00:38:43.733 clat (usec): min=22685, max=74333, avg=37875.63, stdev=2470.41 00:38:43.733 lat (usec): min=22703, max=74347, avg=37910.40, stdev=2469.26 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[73925], 99.95th=[73925], 00:38:43.733 | 99.99th=[73925] 00:38:43.733 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.733 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.733 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.733 cpu : usr=98.29%, sys=1.26%, ctx=13, majf=0, minf=35 00:38:43.733 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 filename2: (groupid=0, jobs=1): err= 0: pid=464430: Wed Nov 6 12:45:13 2024 00:38:43.733 read: IOPS=418, BW=1675KiB/s (1716kB/s)(16.4MiB/10008msec) 00:38:43.733 slat (nsec): min=4350, max=69592, avg=33561.65, stdev=10543.08 00:38:43.733 clat (usec): min=22758, max=74342, avg=37878.48, stdev=2463.21 00:38:43.733 lat (usec): min=22789, max=74359, avg=37912.04, stdev=2462.20 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 
30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[73925], 99.95th=[73925], 00:38:43.733 | 99.99th=[73925] 00:38:43.733 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.733 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.733 lat (msec) : 50=99.62%, 100=0.38% 00:38:43.733 cpu : usr=98.44%, sys=1.16%, ctx=13, majf=0, minf=42 00:38:43.733 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 filename2: (groupid=0, jobs=1): err= 0: pid=464431: Wed Nov 6 12:45:13 2024 00:38:43.733 read: IOPS=420, BW=1680KiB/s (1720kB/s)(16.4MiB/10019msec) 00:38:43.733 slat (nsec): min=8000, max=76514, avg=31216.21, stdev=11498.13 00:38:43.733 clat (usec): min=22893, max=48542, avg=37856.24, stdev=1188.51 00:38:43.733 lat (usec): min=22934, max=48556, avg=37887.46, stdev=1187.12 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[48497], 99.95th=[48497], 00:38:43.733 | 99.99th=[48497] 00:38:43.733 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1670.74, stdev=51.80, samples=19 00:38:43.733 iops : min= 384, max= 448, avg=417.68, stdev=12.95, samples=19 00:38:43.733 lat (msec) : 50=100.00% 00:38:43.733 cpu : usr=98.45%, sys=1.15%, ctx=14, 
majf=0, minf=38 00:38:43.733 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 filename2: (groupid=0, jobs=1): err= 0: pid=464432: Wed Nov 6 12:45:13 2024 00:38:43.733 read: IOPS=431, BW=1726KiB/s (1767kB/s)(16.9MiB/10013msec) 00:38:43.733 slat (nsec): min=3139, max=93156, avg=33586.41, stdev=18556.94 00:38:43.733 clat (usec): min=1729, max=39860, avg=36796.17, stdev=5565.73 00:38:43.733 lat (usec): min=1735, max=39888, avg=36829.76, stdev=5569.27 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[ 1942], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.733 | 99.99th=[40109] 00:38:43.733 bw ( KiB/s): min= 1664, max= 2560, per=4.26%, avg=1721.60, stdev=201.21, samples=20 00:38:43.733 iops : min= 416, max= 640, avg=430.40, stdev=50.30, samples=20 00:38:43.733 lat (msec) : 2=1.11%, 4=0.74%, 10=0.02%, 20=1.46%, 50=96.67% 00:38:43.733 cpu : usr=98.65%, sys=0.88%, ctx=47, majf=0, minf=57 00:38:43.733 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 filename2: (groupid=0, jobs=1): err= 0: pid=464433: Wed Nov 6 
12:45:13 2024 00:38:43.733 read: IOPS=422, BW=1692KiB/s (1732kB/s)(16.6MiB/10026msec) 00:38:43.733 slat (nsec): min=3047, max=97383, avg=37360.83, stdev=17001.25 00:38:43.733 clat (usec): min=12478, max=39886, avg=37482.57, stdev=2373.39 00:38:43.733 lat (usec): min=12492, max=39939, avg=37519.93, stdev=2375.72 00:38:43.733 clat percentiles (usec): 00:38:43.733 | 1.00th=[19268], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:38:43.733 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[38011], 00:38:43.733 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:38:43.733 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:38:43.733 | 99.99th=[40109] 00:38:43.733 bw ( KiB/s): min= 1536, max= 1920, per=4.18%, avg=1689.60, stdev=78.80, samples=20 00:38:43.733 iops : min= 384, max= 480, avg=422.40, stdev=19.70, samples=20 00:38:43.733 lat (msec) : 20=1.13%, 50=98.87% 00:38:43.733 cpu : usr=98.58%, sys=1.07%, ctx=12, majf=0, minf=41 00:38:43.733 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:43.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:43.733 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:43.733 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:43.733 00:38:43.733 Run status group 0 (all jobs): 00:38:43.733 READ: bw=39.4MiB/s (41.4MB/s), 1675KiB/s-1726KiB/s (1715kB/s-1767kB/s), io=395MiB (415MB), run=10004-10026msec 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:43.733 12:45:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.733 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:43.734 12:45:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 bdev_null0 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 [2024-11-06 12:45:14.182012] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 bdev_null1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.734 { 00:38:43.734 "params": { 00:38:43.734 "name": "Nvme$subsystem", 00:38:43.734 "trtype": "$TEST_TRANSPORT", 00:38:43.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.734 "adrfam": "ipv4", 00:38:43.734 "trsvcid": "$NVMF_PORT", 00:38:43.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.734 "hdgst": ${hdgst:-false}, 00:38:43.734 "ddgst": ${ddgst:-false} 00:38:43.734 }, 00:38:43.734 "method": "bdev_nvme_attach_controller" 00:38:43.734 } 00:38:43.734 
EOF 00:38:43.734 )") 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.734 { 
00:38:43.734 "params": { 00:38:43.734 "name": "Nvme$subsystem", 00:38:43.734 "trtype": "$TEST_TRANSPORT", 00:38:43.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.734 "adrfam": "ipv4", 00:38:43.734 "trsvcid": "$NVMF_PORT", 00:38:43.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.734 "hdgst": ${hdgst:-false}, 00:38:43.734 "ddgst": ${ddgst:-false} 00:38:43.734 }, 00:38:43.734 "method": "bdev_nvme_attach_controller" 00:38:43.734 } 00:38:43.734 EOF 00:38:43.734 )") 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:43.734 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.735 "params": { 00:38:43.735 "name": "Nvme0", 00:38:43.735 "trtype": "tcp", 00:38:43.735 "traddr": "10.0.0.2", 00:38:43.735 "adrfam": "ipv4", 00:38:43.735 "trsvcid": "4420", 00:38:43.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.735 "hdgst": false, 00:38:43.735 "ddgst": false 00:38:43.735 }, 00:38:43.735 "method": "bdev_nvme_attach_controller" 00:38:43.735 },{ 00:38:43.735 "params": { 00:38:43.735 "name": "Nvme1", 00:38:43.735 "trtype": "tcp", 00:38:43.735 "traddr": "10.0.0.2", 00:38:43.735 "adrfam": "ipv4", 00:38:43.735 "trsvcid": "4420", 00:38:43.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.735 "hdgst": false, 00:38:43.735 "ddgst": false 00:38:43.735 }, 00:38:43.735 "method": "bdev_nvme_attach_controller" 00:38:43.735 }' 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:43.735 12:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:43.735 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:43.735 ... 00:38:43.735 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:43.735 ... 
00:38:43.735 fio-3.35 00:38:43.735 Starting 4 threads 00:38:48.998 00:38:48.998 filename0: (groupid=0, jobs=1): err= 0: pid=467026: Wed Nov 6 12:45:20 2024 00:38:48.998 read: IOPS=1751, BW=13.7MiB/s (14.3MB/s)(68.5MiB/5004msec) 00:38:48.998 slat (nsec): min=9171, max=47339, avg=15936.27, stdev=4958.84 00:38:48.998 clat (usec): min=823, max=8204, avg=4509.91, stdev=372.95 00:38:48.998 lat (usec): min=839, max=8219, avg=4525.85, stdev=372.96 00:38:48.998 clat percentiles (usec): 00:38:48.998 | 1.00th=[ 3425], 5.00th=[ 4113], 10.00th=[ 4359], 20.00th=[ 4424], 00:38:48.998 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:38:48.998 | 70.00th=[ 4555], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4752], 00:38:48.998 | 99.00th=[ 5669], 99.50th=[ 6521], 99.90th=[ 7635], 99.95th=[ 7898], 00:38:48.998 | 99.99th=[ 8225] 00:38:48.998 bw ( KiB/s): min=13936, max=14256, per=25.08%, avg=14014.40, stdev=99.62, samples=10 00:38:48.998 iops : min= 1742, max= 1782, avg=1751.80, stdev=12.45, samples=10 00:38:48.998 lat (usec) : 1000=0.03% 00:38:48.998 lat (msec) : 2=0.19%, 4=3.83%, 10=95.94% 00:38:48.998 cpu : usr=95.76%, sys=3.80%, ctx=16, majf=0, minf=9 00:38:48.998 IO depths : 1=0.5%, 2=20.5%, 4=53.7%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 issued rwts: total=8765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:48.998 filename0: (groupid=0, jobs=1): err= 0: pid=467027: Wed Nov 6 12:45:20 2024 00:38:48.998 read: IOPS=1742, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5001msec) 00:38:48.998 slat (nsec): min=6838, max=47789, avg=16876.95, stdev=4682.95 00:38:48.998 clat (usec): min=851, max=8374, avg=4529.67, stdev=475.42 00:38:48.998 lat (usec): min=866, max=8389, avg=4546.55, stdev=475.31 00:38:48.998 clat percentiles (usec): 
00:38:48.998 | 1.00th=[ 3130], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4424], 00:38:48.998 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:38:48.998 | 70.00th=[ 4555], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4817], 00:38:48.998 | 99.00th=[ 6652], 99.50th=[ 7504], 99.90th=[ 8094], 99.95th=[ 8225], 00:38:48.998 | 99.99th=[ 8356] 00:38:48.998 bw ( KiB/s): min=13616, max=14080, per=24.98%, avg=13955.56, stdev=140.58, samples=9 00:38:48.998 iops : min= 1702, max= 1760, avg=1744.44, stdev=17.57, samples=9 00:38:48.998 lat (usec) : 1000=0.11% 00:38:48.998 lat (msec) : 2=0.50%, 4=2.64%, 10=96.74% 00:38:48.998 cpu : usr=95.70%, sys=3.86%, ctx=13, majf=0, minf=9 00:38:48.998 IO depths : 1=0.7%, 2=20.3%, 4=54.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 issued rwts: total=8713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:48.998 filename1: (groupid=0, jobs=1): err= 0: pid=467028: Wed Nov 6 12:45:20 2024 00:38:48.998 read: IOPS=1750, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5003msec) 00:38:48.998 slat (nsec): min=5441, max=39958, avg=10895.82, stdev=4289.58 00:38:48.998 clat (usec): min=1207, max=8456, avg=4534.51, stdev=350.90 00:38:48.998 lat (usec): min=1212, max=8461, avg=4545.40, stdev=351.03 00:38:48.998 clat percentiles (usec): 00:38:48.998 | 1.00th=[ 3458], 5.00th=[ 4146], 10.00th=[ 4424], 20.00th=[ 4490], 00:38:48.998 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4555], 00:38:48.998 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4621], 95.00th=[ 4817], 00:38:48.998 | 99.00th=[ 5604], 99.50th=[ 6521], 99.90th=[ 8160], 99.95th=[ 8356], 00:38:48.998 | 99.99th=[ 8455] 00:38:48.998 bw ( KiB/s): min=13728, max=14192, per=25.06%, avg=14001.60, stdev=137.32, samples=10 00:38:48.998 iops : min= 
1716, max= 1774, avg=1750.20, stdev=17.16, samples=10 00:38:48.998 lat (msec) : 2=0.06%, 4=3.78%, 10=96.16% 00:38:48.998 cpu : usr=95.90%, sys=3.72%, ctx=50, majf=0, minf=9 00:38:48.998 IO depths : 1=0.2%, 2=9.7%, 4=63.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.998 issued rwts: total=8759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:48.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:48.998 filename1: (groupid=0, jobs=1): err= 0: pid=467029: Wed Nov 6 12:45:20 2024 00:38:48.998 read: IOPS=1741, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5001msec) 00:38:48.998 slat (nsec): min=7979, max=54310, avg=16739.19, stdev=5554.79 00:38:48.998 clat (usec): min=842, max=8593, avg=4527.04, stdev=516.28 00:38:48.998 lat (usec): min=853, max=8610, avg=4543.78, stdev=516.28 00:38:48.998 clat percentiles (usec): 00:38:48.998 | 1.00th=[ 2966], 5.00th=[ 4113], 10.00th=[ 4424], 20.00th=[ 4424], 00:38:48.998 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:38:48.998 | 70.00th=[ 4555], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 5014], 00:38:48.998 | 99.00th=[ 6718], 99.50th=[ 7504], 99.90th=[ 8225], 99.95th=[ 8225], 00:38:48.998 | 99.99th=[ 8586] 00:38:48.999 bw ( KiB/s): min=13712, max=14080, per=24.97%, avg=13953.22, stdev=104.16, samples=9 00:38:48.999 iops : min= 1714, max= 1760, avg=1744.11, stdev=13.02, samples=9 00:38:48.999 lat (usec) : 1000=0.21% 00:38:48.999 lat (msec) : 2=0.54%, 4=3.28%, 10=95.97% 00:38:48.999 cpu : usr=93.74%, sys=4.46%, ctx=311, majf=0, minf=9 00:38:48.999 IO depths : 1=1.1%, 2=21.1%, 4=52.4%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:48.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.999 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:48.999 issued rwts: total=8711,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:48.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:48.999 00:38:48.999 Run status group 0 (all jobs): 00:38:48.999 READ: bw=54.6MiB/s (57.2MB/s), 13.6MiB/s-13.7MiB/s (14.3MB/s-14.3MB/s), io=273MiB (286MB), run=5001-5004msec 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 00:38:49.257 real 0m24.868s 00:38:49.257 user 5m5.270s 00:38:49.257 sys 0m5.657s 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 ************************************ 00:38:49.257 END TEST fio_dif_rand_params 00:38:49.257 ************************************ 00:38:49.257 12:45:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:49.257 12:45:20 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:49.257 12:45:20 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 ************************************ 00:38:49.257 START TEST fio_dif_digest 00:38:49.257 ************************************ 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- 
# local hdgst ddgst 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 bdev_null0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.257 [2024-11-06 12:45:20.848980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.257 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:49.258 12:45:20 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:49.258 { 00:38:49.258 "params": { 00:38:49.258 "name": "Nvme$subsystem", 00:38:49.258 "trtype": "$TEST_TRANSPORT", 00:38:49.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:49.258 "adrfam": "ipv4", 00:38:49.258 "trsvcid": "$NVMF_PORT", 00:38:49.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:49.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:49.258 "hdgst": ${hdgst:-false}, 00:38:49.258 "ddgst": ${ddgst:-false} 00:38:49.258 }, 00:38:49.258 "method": "bdev_nvme_attach_controller" 00:38:49.258 } 00:38:49.258 EOF 00:38:49.258 )") 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 
-- # (( file <= files )) 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:49.258 12:45:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:49.258 "params": { 00:38:49.258 "name": "Nvme0", 00:38:49.258 "trtype": "tcp", 00:38:49.258 "traddr": "10.0.0.2", 00:38:49.258 "adrfam": "ipv4", 00:38:49.258 "trsvcid": "4420", 00:38:49.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.258 "hdgst": true, 00:38:49.258 "ddgst": true 00:38:49.258 }, 00:38:49.258 "method": "bdev_nvme_attach_controller" 00:38:49.258 }' 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:49.516 12:45:20 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.773 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:49.773 ... 00:38:49.773 fio-3.35 00:38:49.773 Starting 3 threads 00:39:01.965 00:39:01.965 filename0: (groupid=0, jobs=1): err= 0: pid=468239: Wed Nov 6 12:45:31 2024 00:39:01.965 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10049msec) 00:39:01.965 slat (nsec): min=9713, max=34458, avg=24089.35, stdev=5467.68 00:39:01.965 clat (usec): min=11895, max=54802, avg=14978.66, stdev=1575.17 00:39:01.965 lat (usec): min=11914, max=54830, avg=15002.75, stdev=1575.17 00:39:01.965 clat percentiles (usec): 00:39:01.965 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:39:01.965 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:39:01.965 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:39:01.965 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[51643], 00:39:01.965 | 99.99th=[54789] 00:39:01.965 bw ( KiB/s): min=24832, max=26368, per=35.73%, avg=25651.20, stdev=474.23, samples=20 00:39:01.965 iops : min= 194, max= 206, avg=200.40, stdev= 3.70, samples=20 00:39:01.965 lat (msec) : 20=99.90%, 100=0.10% 00:39:01.965 cpu : usr=95.28%, sys=4.34%, ctx=71, majf=0, minf=67 00:39:01.965 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.965 filename0: (groupid=0, jobs=1): err= 0: pid=468240: Wed Nov 6 12:45:31 2024 00:39:01.965 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10046msec) 00:39:01.965 slat (nsec): 
min=10089, max=52011, avg=23090.08, stdev=7800.02 00:39:01.965 clat (usec): min=11870, max=55433, avg=16312.25, stdev=1592.64 00:39:01.965 lat (usec): min=11901, max=55450, avg=16335.34, stdev=1592.68 00:39:01.965 clat percentiles (usec): 00:39:01.965 | 1.00th=[13960], 5.00th=[14615], 10.00th=[15008], 20.00th=[15401], 00:39:01.965 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16188], 60.00th=[16450], 00:39:01.965 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:39:01.965 | 99.00th=[19268], 99.50th=[19792], 99.90th=[49021], 99.95th=[55313], 00:39:01.965 | 99.99th=[55313] 00:39:01.965 bw ( KiB/s): min=22784, max=24320, per=32.81%, avg=23552.00, stdev=389.57, samples=20 00:39:01.965 iops : min= 178, max= 190, avg=184.00, stdev= 3.04, samples=20 00:39:01.965 lat (msec) : 20=99.67%, 50=0.27%, 100=0.05% 00:39:01.965 cpu : usr=96.34%, sys=3.30%, ctx=17, majf=0, minf=97 00:39:01.965 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.965 filename0: (groupid=0, jobs=1): err= 0: pid=468241: Wed Nov 6 12:45:31 2024 00:39:01.965 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(224MiB/10046msec) 00:39:01.965 slat (nsec): min=9893, max=51385, avg=22368.91, stdev=7536.76 00:39:01.965 clat (usec): min=13795, max=56020, avg=16806.29, stdev=1548.02 00:39:01.965 lat (usec): min=13810, max=56050, avg=16828.66, stdev=1548.50 00:39:01.965 clat percentiles (usec): 00:39:01.965 | 1.00th=[14484], 5.00th=[15401], 10.00th=[15664], 20.00th=[16057], 00:39:01.965 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:39:01.965 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:39:01.965 | 
99.00th=[19530], 99.50th=[19792], 99.90th=[49546], 99.95th=[55837], 00:39:01.965 | 99.99th=[55837] 00:39:01.965 bw ( KiB/s): min=22016, max=23552, per=31.84%, avg=22860.80, stdev=448.05, samples=20 00:39:01.965 iops : min= 172, max= 184, avg=178.60, stdev= 3.50, samples=20 00:39:01.965 lat (msec) : 20=99.61%, 50=0.34%, 100=0.06% 00:39:01.965 cpu : usr=96.67%, sys=2.97%, ctx=16, majf=0, minf=41 00:39:01.965 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.965 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:01.965 00:39:01.965 Run status group 0 (all jobs): 00:39:01.965 READ: bw=70.1MiB/s (73.5MB/s), 22.2MiB/s-25.0MiB/s (23.3MB/s-26.2MB/s), io=705MiB (739MB), run=10046-10049msec 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.965 00:39:01.965 real 0m11.333s 00:39:01.965 user 0m40.877s 00:39:01.965 sys 0m1.402s 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:01.965 12:45:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:01.965 ************************************ 00:39:01.965 END TEST fio_dif_digest 00:39:01.965 ************************************ 00:39:01.965 12:45:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:01.965 12:45:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.965 rmmod nvme_tcp 00:39:01.965 rmmod nvme_fabrics 00:39:01.965 rmmod nvme_keyring 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 458427 ']' 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 458427 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 458427 ']' 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 458427 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:39:01.965 12:45:32 nvmf_dif -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 458427 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 458427' 00:39:01.965 killing process with pid 458427 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@971 -- # kill 458427 00:39:01.965 12:45:32 nvmf_dif -- common/autotest_common.sh@976 -- # wait 458427 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:01.965 12:45:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:03.339 Waiting for block devices as requested 00:39:03.597 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:39:03.597 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:03.856 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:03.856 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:03.856 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:03.856 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:04.114 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:04.114 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:04.114 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:04.372 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:04.372 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:04.372 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:04.630 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:04.630 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:04.630 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:04.630 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:04.888 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:04.888 12:45:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.888 12:45:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:04.888 12:45:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.417 12:45:38 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:07.417 00:39:07.417 real 1m13.923s 00:39:07.417 user 7m38.035s 00:39:07.417 sys 0m20.161s 00:39:07.417 12:45:38 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:07.417 12:45:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:07.417 ************************************ 00:39:07.417 END TEST nvmf_dif 00:39:07.417 ************************************ 00:39:07.417 12:45:38 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:07.417 12:45:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:07.417 12:45:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:07.417 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:39:07.417 ************************************ 00:39:07.417 START TEST nvmf_abort_qd_sizes 00:39:07.417 ************************************ 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 
00:39:07.417 * Looking for test storage... 00:39:07.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.417 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:07.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.418 --rc genhtml_branch_coverage=1 00:39:07.418 --rc genhtml_function_coverage=1 00:39:07.418 --rc genhtml_legend=1 00:39:07.418 --rc geninfo_all_blocks=1 00:39:07.418 --rc geninfo_unexecuted_blocks=1 00:39:07.418 00:39:07.418 ' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:07.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.418 --rc genhtml_branch_coverage=1 00:39:07.418 --rc genhtml_function_coverage=1 00:39:07.418 --rc genhtml_legend=1 00:39:07.418 --rc 
geninfo_all_blocks=1 00:39:07.418 --rc geninfo_unexecuted_blocks=1 00:39:07.418 00:39:07.418 ' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:07.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.418 --rc genhtml_branch_coverage=1 00:39:07.418 --rc genhtml_function_coverage=1 00:39:07.418 --rc genhtml_legend=1 00:39:07.418 --rc geninfo_all_blocks=1 00:39:07.418 --rc geninfo_unexecuted_blocks=1 00:39:07.418 00:39:07.418 ' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:07.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.418 --rc genhtml_branch_coverage=1 00:39:07.418 --rc genhtml_function_coverage=1 00:39:07.418 --rc genhtml_legend=1 00:39:07.418 --rc geninfo_all_blocks=1 00:39:07.418 --rc geninfo_unexecuted_blocks=1 00:39:07.418 00:39:07.418 ' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.418 12:45:38 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.418 12:45:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:07.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:07.418 12:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:12.681 12:45:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:12.681 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:12.682 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:12.682 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:12.682 Found net devices under 0000:af:00.0: cvl_0_0 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:12.682 Found net devices under 0000:af:00.1: cvl_0_1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:12.682 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:12.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:12.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:39:12.940 00:39:12.940 --- 10.0.0.2 ping statistics --- 00:39:12.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.940 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:12.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:12.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:39:12.940 00:39:12.940 --- 10.0.0.1 ping statistics --- 00:39:12.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.940 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:12.940 12:45:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:15.494 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:15.494 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:16.427 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:39:16.427 12:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:16.427 12:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:16.427 12:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:16.427 12:45:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:16.427 12:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:16.427 12:45:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=476521 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 476521 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 476521 ']' 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:16.427 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:16.685 [2024-11-06 12:45:48.073822] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:39:16.685 [2024-11-06 12:45:48.073878] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.685 [2024-11-06 12:45:48.174593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:16.685 [2024-11-06 12:45:48.225945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.685 [2024-11-06 12:45:48.225987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.685 [2024-11-06 12:45:48.225998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.685 [2024-11-06 12:45:48.226007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.685 [2024-11-06 12:45:48.226014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:16.685 [2024-11-06 12:45:48.228076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.685 [2024-11-06 12:45:48.228176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:16.685 [2024-11-06 12:45:48.228283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:16.685 [2024-11-06 12:45:48.228284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:86:00.0 ]] 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 
00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:86:00.0 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:16.943 12:45:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:16.943 ************************************ 00:39:16.943 START TEST spdk_target_abort 00:39:16.943 ************************************ 00:39:16.943 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:39:16.943 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:16.943 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:39:16.943 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.943 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.222 spdk_targetn1 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.222 [2024-11-06 12:45:51.276223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.222 [2024-11-06 12:45:51.317651] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.222 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.502 Initializing NVMe Controllers 00:39:23.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.502 Initialization complete. Launching workers. 
00:39:23.502 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14457, failed: 0 00:39:23.502 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1444, failed to submit 13013 00:39:23.502 success 741, unsuccessful 703, failed 0 00:39:23.502 12:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:23.502 12:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:26.785 Initializing NVMe Controllers 00:39:26.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:26.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:26.785 Initialization complete. Launching workers. 00:39:26.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8619, failed: 0 00:39:26.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7405 00:39:26.785 success 344, unsuccessful 870, failed 0 00:39:26.785 12:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:26.785 12:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:30.069 Initializing NVMe Controllers 00:39:30.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:30.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:30.069 Initialization complete. Launching workers. 
00:39:30.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38316, failed: 0 00:39:30.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2594, failed to submit 35722 00:39:30.069 success 584, unsuccessful 2010, failed 0 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.069 12:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 476521 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 476521 ']' 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 476521 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 476521 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 476521' 00:39:31.003 killing process with pid 476521 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 476521 00:39:31.003 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 476521 00:39:31.261 00:39:31.261 real 0m14.310s 00:39:31.261 user 0m54.703s 00:39:31.261 sys 0m2.462s 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:31.261 ************************************ 00:39:31.261 END TEST spdk_target_abort 00:39:31.261 ************************************ 00:39:31.261 12:46:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:31.261 12:46:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:31.261 12:46:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:31.261 12:46:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:31.261 ************************************ 00:39:31.261 START TEST kernel_target_abort 00:39:31.261 ************************************ 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:31.261 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:31.262 12:46:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:31.262 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:31.262 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:31.262 12:46:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:33.790 Waiting for block devices as requested 00:39:34.049 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:39:34.049 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:34.307 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:34.307 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:34.307 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:34.565 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:34.565 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:34.565 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:34.565 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:34.823 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:34.823 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:34.823 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:35.081 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:35.081 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:35.081 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:35.339 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:35.339 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:35.339 No valid GPT data, bailing 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:35.339 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:35.597 12:46:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:39:35.597 00:39:35.597 Discovery Log Number of Records 2, Generation counter 2 00:39:35.597 =====Discovery Log Entry 0====== 00:39:35.597 trtype: tcp 00:39:35.597 adrfam: ipv4 00:39:35.597 subtype: current discovery subsystem 00:39:35.597 treq: not specified, sq flow control disable supported 00:39:35.597 portid: 1 00:39:35.597 trsvcid: 4420 00:39:35.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:35.597 traddr: 10.0.0.1 00:39:35.597 eflags: none 00:39:35.597 sectype: none 00:39:35.597 =====Discovery Log Entry 1====== 00:39:35.597 trtype: tcp 00:39:35.597 adrfam: ipv4 00:39:35.597 subtype: nvme subsystem 00:39:35.597 treq: not specified, sq flow control disable supported 00:39:35.597 portid: 1 00:39:35.597 trsvcid: 4420 00:39:35.597 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:35.597 traddr: 10.0.0.1 00:39:35.597 eflags: none 00:39:35.597 sectype: none 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:35.597 12:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:38.881 Initializing NVMe Controllers 00:39:38.881 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:38.881 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:38.881 Initialization complete. Launching workers. 
00:39:38.881 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49114, failed: 0 00:39:38.881 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 49114, failed to submit 0 00:39:38.881 success 0, unsuccessful 49114, failed 0 00:39:38.881 12:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:38.881 12:46:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:42.160 Initializing NVMe Controllers 00:39:42.160 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:42.160 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:42.160 Initialization complete. Launching workers. 00:39:42.160 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84467, failed: 0 00:39:42.160 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19342, failed to submit 65125 00:39:42.160 success 0, unsuccessful 19342, failed 0 00:39:42.160 12:46:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:42.160 12:46:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:45.441 Initializing NVMe Controllers 00:39:45.441 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:45.441 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:45.441 Initialization complete. Launching workers. 
00:39:45.441 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78493, failed: 0 00:39:45.441 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19574, failed to submit 58919 00:39:45.441 success 0, unsuccessful 19574, failed 0 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:45.441 12:46:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:48.005 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:48.005 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:48.573 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:39:48.830 00:39:48.830 real 0m17.423s 00:39:48.830 user 0m8.286s 00:39:48.830 sys 0m5.134s 00:39:48.830 12:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:48.830 12:46:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.830 ************************************ 00:39:48.830 END TEST kernel_target_abort 00:39:48.830 ************************************ 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.830 rmmod nvme_tcp 00:39:48.830 rmmod nvme_fabrics 00:39:48.830 rmmod nvme_keyring 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 476521 ']' 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 476521 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 476521 ']' 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 476521 00:39:48.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (476521) - No such process 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 476521 is not found' 00:39:48.830 Process with pid 476521 is not found 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:48.830 12:46:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:51.362 Waiting for block devices as requested 00:39:51.362 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:39:51.619 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:51.619 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:51.877 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:51.877 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:51.877 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:51.877 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:52.135 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:52.135 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:52.135 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:52.393 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:52.393 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:52.393 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:52.393 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:52.651 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:39:52.651 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:52.651 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:52.910 12:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.813 12:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.813 00:39:54.813 real 0m47.804s 00:39:54.813 user 1m7.056s 00:39:54.813 sys 0m15.901s 00:39:54.813 12:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:54.813 12:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:54.813 ************************************ 00:39:54.813 END TEST nvmf_abort_qd_sizes 00:39:54.813 ************************************ 00:39:54.813 12:46:26 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:54.813 12:46:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:54.813 12:46:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:39:54.813 12:46:26 -- common/autotest_common.sh@10 -- # set +x 00:39:54.813 ************************************ 00:39:54.813 START TEST keyring_file 00:39:54.813 ************************************ 00:39:54.813 12:46:26 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:55.072 * Looking for test storage... 00:39:55.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:55.072 12:46:26 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:55.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.072 --rc genhtml_branch_coverage=1 00:39:55.072 --rc genhtml_function_coverage=1 00:39:55.072 --rc genhtml_legend=1 00:39:55.072 --rc geninfo_all_blocks=1 00:39:55.072 --rc geninfo_unexecuted_blocks=1 00:39:55.072 00:39:55.072 ' 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:55.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.072 --rc genhtml_branch_coverage=1 00:39:55.072 --rc genhtml_function_coverage=1 00:39:55.072 --rc genhtml_legend=1 00:39:55.072 --rc geninfo_all_blocks=1 00:39:55.072 --rc 
geninfo_unexecuted_blocks=1 00:39:55.072 00:39:55.072 ' 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:55.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.072 --rc genhtml_branch_coverage=1 00:39:55.072 --rc genhtml_function_coverage=1 00:39:55.072 --rc genhtml_legend=1 00:39:55.072 --rc geninfo_all_blocks=1 00:39:55.072 --rc geninfo_unexecuted_blocks=1 00:39:55.072 00:39:55.072 ' 00:39:55.072 12:46:26 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:55.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.072 --rc genhtml_branch_coverage=1 00:39:55.072 --rc genhtml_function_coverage=1 00:39:55.072 --rc genhtml_legend=1 00:39:55.072 --rc geninfo_all_blocks=1 00:39:55.072 --rc geninfo_unexecuted_blocks=1 00:39:55.072 00:39:55.072 ' 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:55.072 12:46:26 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:55.072 12:46:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:55.072 12:46:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.072 12:46:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.072 12:46:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.072 12:46:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:55.072 12:46:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:55.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:55.072 12:46:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BcXwFcJoAF 00:39:55.072 12:46:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:55.072 12:46:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BcXwFcJoAF 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BcXwFcJoAF 00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BcXwFcJoAF 00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.N2rJ7ZPO28 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:55.331 12:46:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.N2rJ7ZPO28 00:39:55.331 12:46:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.N2rJ7ZPO28 00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.N2rJ7ZPO28 
00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=485732 00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 485732 00:39:55.331 12:46:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 485732 ']' 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:55.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:55.331 12:46:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:55.331 [2024-11-06 12:46:26.820237] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:39:55.331 [2024-11-06 12:46:26.820304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485732 ] 00:39:55.331 [2024-11-06 12:46:26.916098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.589 [2024-11-06 12:46:26.966417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.589 12:46:27 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:55.589 12:46:27 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:39:55.589 12:46:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:55.589 12:46:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.589 12:46:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:55.589 [2024-11-06 12:46:27.193584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.848 null0 00:39:55.848 [2024-11-06 12:46:27.225620] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:55.848 [2024-11-06 12:46:27.226011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:55.848 12:46:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:55.848 [2024-11-06 12:46:27.249670] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:55.848 request: 00:39:55.848 { 00:39:55.848 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.848 "secure_channel": false, 00:39:55.848 "listen_address": { 00:39:55.848 "trtype": "tcp", 00:39:55.848 "traddr": "127.0.0.1", 00:39:55.848 "trsvcid": "4420" 00:39:55.848 }, 00:39:55.848 "method": "nvmf_subsystem_add_listener", 00:39:55.848 "req_id": 1 00:39:55.848 } 00:39:55.848 Got JSON-RPC error response 00:39:55.848 response: 00:39:55.848 { 00:39:55.848 "code": -32602, 00:39:55.848 "message": "Invalid parameters" 00:39:55.848 } 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:55.848 12:46:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=485765 00:39:55.848 12:46:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 485765 /var/tmp/bperf.sock 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 485765 ']' 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:55.848 12:46:27 
keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:55.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:55.848 12:46:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:55.848 12:46:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:55.848 [2024-11-06 12:46:27.298524] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 00:39:55.848 [2024-11-06 12:46:27.298564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485765 ] 00:39:55.848 [2024-11-06 12:46:27.355118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.848 [2024-11-06 12:46:27.395944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.106 12:46:27 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:56.106 12:46:27 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:39:56.106 12:46:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:39:56.106 12:46:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:39:56.364 12:46:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N2rJ7ZPO28 00:39:56.364 12:46:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N2rJ7ZPO28 00:39:56.364 12:46:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:56.364 12:46:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:56.364 12:46:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.364 12:46:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.364 12:46:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.931 12:46:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BcXwFcJoAF == \/\t\m\p\/\t\m\p\.\B\c\X\w\F\c\J\o\A\F ]] 00:39:56.931 12:46:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:56.931 12:46:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.931 12:46:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.N2rJ7ZPO28 == \/\t\m\p\/\t\m\p\.\N\2\r\J\7\Z\P\O\2\8 ]] 00:39:56.931 12:46:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.931 12:46:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:39:57.190 12:46:28 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:57.190 12:46:28 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:57.190 12:46:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:57.190 12:46:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:57.190 12:46:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:57.190 12:46:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:57.190 12:46:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.757 12:46:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:57.757 12:46:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.757 [2024-11-06 12:46:29.234805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:57.757 nvme0n1 00:39:57.757 12:46:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:57.757 12:46:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:58.016 12:46:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:58.016 12:46:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:58.016 12:46:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:58.016 12:46:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:58.016 12:46:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:58.016 12:46:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:58.016 12:46:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:58.274 12:46:29 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:58.274 12:46:29 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:58.532 Running I/O for 1 seconds... 00:39:59.468 13416.00 IOPS, 52.41 MiB/s 00:39:59.468 Latency(us) 00:39:59.468 [2024-11-06T11:46:31.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.468 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:59.468 nvme0n1 : 1.01 13416.16 52.41 0.00 0.00 9494.11 3530.01 12153.95 00:39:59.468 [2024-11-06T11:46:31.083Z] =================================================================================================================== 00:39:59.468 [2024-11-06T11:46:31.083Z] Total : 13416.16 52.41 0.00 0.00 9494.11 3530.01 12153.95 00:39:59.468 { 00:39:59.468 "results": [ 00:39:59.468 { 00:39:59.468 "job": "nvme0n1", 00:39:59.468 "core_mask": "0x2", 00:39:59.468 "workload": "randrw", 00:39:59.468 "percentage": 50, 00:39:59.468 "status": "finished", 00:39:59.468 "queue_depth": 128, 00:39:59.468 "io_size": 4096, 00:39:59.468 "runtime": 1.009529, 00:39:59.468 "iops": 13416.157435794315, 00:39:59.468 "mibps": 52.406864983571545, 
00:39:59.468 "io_failed": 0, 00:39:59.468 "io_timeout": 0, 00:39:59.468 "avg_latency_us": 9494.107028942706, 00:39:59.468 "min_latency_us": 3530.0072727272727, 00:39:59.468 "max_latency_us": 12153.949090909091 00:39:59.468 } 00:39:59.468 ], 00:39:59.468 "core_count": 1 00:39:59.468 } 00:39:59.468 12:46:30 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:59.468 12:46:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:59.727 12:46:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:59.727 12:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:59.727 12:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:59.727 12:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:59.727 12:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:59.727 12:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.986 12:46:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:59.986 12:46:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:59.986 12:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:59.986 12:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:59.986 12:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:59.986 12:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.986 12:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:00.245 12:46:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:00.245 12:46:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:00.245 12:46:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:00.245 12:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:00.504 [2024-11-06 12:46:31.961338] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:00.504 [2024-11-06 12:46:31.962033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b326d0 (107): Transport endpoint is not connected 00:40:00.504 [2024-11-06 12:46:31.963026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b326d0 (9): Bad file descriptor 00:40:00.505 [2024-11-06 12:46:31.964028] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:00.505 [2024-11-06 12:46:31.964039] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:00.505 [2024-11-06 12:46:31.964046] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:00.505 [2024-11-06 12:46:31.964053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:40:00.505 request: 00:40:00.505 { 00:40:00.505 "name": "nvme0", 00:40:00.505 "trtype": "tcp", 00:40:00.505 "traddr": "127.0.0.1", 00:40:00.505 "adrfam": "ipv4", 00:40:00.505 "trsvcid": "4420", 00:40:00.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:00.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:00.505 "prchk_reftag": false, 00:40:00.505 "prchk_guard": false, 00:40:00.505 "hdgst": false, 00:40:00.505 "ddgst": false, 00:40:00.505 "psk": "key1", 00:40:00.505 "allow_unrecognized_csi": false, 00:40:00.505 "method": "bdev_nvme_attach_controller", 00:40:00.505 "req_id": 1 00:40:00.505 } 00:40:00.505 Got JSON-RPC error response 00:40:00.505 response: 00:40:00.505 { 00:40:00.505 "code": -5, 00:40:00.505 "message": "Input/output error" 00:40:00.505 } 00:40:00.505 12:46:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:00.505 12:46:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:00.505 12:46:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:00.505 12:46:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:00.505 12:46:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:00.505 12:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:00.505 12:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.505 12:46:31 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:00.505 12:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:00.505 12:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:00.763 12:46:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:00.763 12:46:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:00.763 12:46:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:00.763 12:46:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:00.763 12:46:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:00.763 12:46:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:00.763 12:46:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.021 12:46:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:01.022 12:46:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:01.022 12:46:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:01.280 12:46:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:01.280 12:46:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:01.539 12:46:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:01.539 12:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.539 12:46:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:01.798 12:46:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:40:01.798 12:46:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BcXwFcJoAF 00:40:01.798 12:46:33 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.798 12:46:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:01.798 12:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:02.057 [2024-11-06 12:46:33.617309] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BcXwFcJoAF': 0100660 00:40:02.057 [2024-11-06 12:46:33.617336] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:02.057 request: 00:40:02.057 { 00:40:02.057 "name": "key0", 00:40:02.057 "path": "/tmp/tmp.BcXwFcJoAF", 00:40:02.057 "method": "keyring_file_add_key", 00:40:02.057 "req_id": 1 00:40:02.057 } 00:40:02.057 Got JSON-RPC error response 00:40:02.057 response: 00:40:02.057 { 00:40:02.057 "code": -1, 00:40:02.057 "message": "Operation not permitted" 00:40:02.057 } 00:40:02.057 12:46:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:02.057 12:46:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:02.057 12:46:33 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:02.057 12:46:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:02.057 12:46:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BcXwFcJoAF 00:40:02.057 12:46:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:02.057 12:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BcXwFcJoAF 00:40:02.316 12:46:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BcXwFcJoAF 00:40:02.316 12:46:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:02.316 12:46:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:02.316 12:46:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:02.316 12:46:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:02.316 12:46:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:02.316 12:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:02.575 12:46:34 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:02.575 12:46:34 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:02.575 12:46:34 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:02.575 12:46:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:02.575 12:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:02.834 [2024-11-06 12:46:34.371279] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BcXwFcJoAF': No such file or directory 00:40:02.834 [2024-11-06 12:46:34.371301] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:02.834 [2024-11-06 12:46:34.371316] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:02.834 [2024-11-06 12:46:34.371322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:02.834 [2024-11-06 12:46:34.371329] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:02.834 [2024-11-06 12:46:34.371335] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:02.834 request: 00:40:02.834 { 00:40:02.834 "name": "nvme0", 00:40:02.834 "trtype": "tcp", 00:40:02.834 "traddr": "127.0.0.1", 00:40:02.834 "adrfam": "ipv4", 00:40:02.834 "trsvcid": "4420", 00:40:02.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:02.834 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:40:02.834 "prchk_reftag": false, 00:40:02.834 "prchk_guard": false, 00:40:02.834 "hdgst": false, 00:40:02.834 "ddgst": false, 00:40:02.834 "psk": "key0", 00:40:02.834 "allow_unrecognized_csi": false, 00:40:02.834 "method": "bdev_nvme_attach_controller", 00:40:02.834 "req_id": 1 00:40:02.834 } 00:40:02.834 Got JSON-RPC error response 00:40:02.834 response: 00:40:02.834 { 00:40:02.834 "code": -19, 00:40:02.834 "message": "No such device" 00:40:02.834 } 00:40:02.834 12:46:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:40:02.834 12:46:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:02.834 12:46:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:02.834 12:46:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:02.834 12:46:34 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:02.834 12:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:03.092 12:46:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1xPjPYVYuL 00:40:03.092 12:46:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:03.092 12:46:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:03.092 12:46:34 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:40:03.092 12:46:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:03.092 12:46:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:03.092 12:46:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:03.092 12:46:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:03.349 12:46:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1xPjPYVYuL 00:40:03.349 12:46:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1xPjPYVYuL 00:40:03.349 12:46:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1xPjPYVYuL 00:40:03.349 12:46:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xPjPYVYuL 00:40:03.349 12:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1xPjPYVYuL 00:40:03.608 12:46:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:03.608 12:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:03.866 nvme0n1 00:40:03.866 12:46:35 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:03.866 12:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:03.866 12:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:03.866 12:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:03.866 12:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:03.866 12:46:35 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:04.124 12:46:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:04.124 12:46:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:04.124 12:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:04.383 12:46:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:04.383 12:46:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:04.383 12:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:04.383 12:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:04.383 12:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:04.641 12:46:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:04.641 12:46:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:04.641 12:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:04.641 12:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:04.641 12:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:04.641 12:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:04.641 12:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:04.899 12:46:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:04.899 12:46:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:04.899 12:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:40:05.157 12:46:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:05.157 12:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.157 12:46:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:05.415 12:46:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:05.415 12:46:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1xPjPYVYuL 00:40:05.415 12:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1xPjPYVYuL 00:40:05.673 12:46:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N2rJ7ZPO28 00:40:05.673 12:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N2rJ7ZPO28 00:40:05.935 12:46:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:05.935 12:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:06.292 nvme0n1 00:40:06.292 12:46:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:06.292 12:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:06.597 12:46:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:06.597 "subsystems": [ 00:40:06.597 { 00:40:06.597 "subsystem": 
"keyring", 00:40:06.597 "config": [ 00:40:06.597 { 00:40:06.597 "method": "keyring_file_add_key", 00:40:06.597 "params": { 00:40:06.597 "name": "key0", 00:40:06.597 "path": "/tmp/tmp.1xPjPYVYuL" 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "keyring_file_add_key", 00:40:06.597 "params": { 00:40:06.597 "name": "key1", 00:40:06.597 "path": "/tmp/tmp.N2rJ7ZPO28" 00:40:06.597 } 00:40:06.597 } 00:40:06.597 ] 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "subsystem": "iobuf", 00:40:06.597 "config": [ 00:40:06.597 { 00:40:06.597 "method": "iobuf_set_options", 00:40:06.597 "params": { 00:40:06.597 "small_pool_count": 8192, 00:40:06.597 "large_pool_count": 1024, 00:40:06.597 "small_bufsize": 8192, 00:40:06.597 "large_bufsize": 135168, 00:40:06.597 "enable_numa": false 00:40:06.597 } 00:40:06.597 } 00:40:06.597 ] 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "subsystem": "sock", 00:40:06.597 "config": [ 00:40:06.597 { 00:40:06.597 "method": "sock_set_default_impl", 00:40:06.597 "params": { 00:40:06.597 "impl_name": "posix" 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "sock_impl_set_options", 00:40:06.597 "params": { 00:40:06.597 "impl_name": "ssl", 00:40:06.597 "recv_buf_size": 4096, 00:40:06.597 "send_buf_size": 4096, 00:40:06.597 "enable_recv_pipe": true, 00:40:06.597 "enable_quickack": false, 00:40:06.597 "enable_placement_id": 0, 00:40:06.597 "enable_zerocopy_send_server": true, 00:40:06.597 "enable_zerocopy_send_client": false, 00:40:06.597 "zerocopy_threshold": 0, 00:40:06.597 "tls_version": 0, 00:40:06.597 "enable_ktls": false 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "sock_impl_set_options", 00:40:06.597 "params": { 00:40:06.597 "impl_name": "posix", 00:40:06.597 "recv_buf_size": 2097152, 00:40:06.597 "send_buf_size": 2097152, 00:40:06.597 "enable_recv_pipe": true, 00:40:06.597 "enable_quickack": false, 00:40:06.597 "enable_placement_id": 0, 00:40:06.597 "enable_zerocopy_send_server": true, 
00:40:06.597 "enable_zerocopy_send_client": false, 00:40:06.597 "zerocopy_threshold": 0, 00:40:06.597 "tls_version": 0, 00:40:06.597 "enable_ktls": false 00:40:06.597 } 00:40:06.597 } 00:40:06.597 ] 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "subsystem": "vmd", 00:40:06.597 "config": [] 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "subsystem": "accel", 00:40:06.597 "config": [ 00:40:06.597 { 00:40:06.597 "method": "accel_set_options", 00:40:06.597 "params": { 00:40:06.597 "small_cache_size": 128, 00:40:06.597 "large_cache_size": 16, 00:40:06.597 "task_count": 2048, 00:40:06.597 "sequence_count": 2048, 00:40:06.597 "buf_count": 2048 00:40:06.597 } 00:40:06.597 } 00:40:06.597 ] 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "subsystem": "bdev", 00:40:06.597 "config": [ 00:40:06.597 { 00:40:06.597 "method": "bdev_set_options", 00:40:06.597 "params": { 00:40:06.597 "bdev_io_pool_size": 65535, 00:40:06.597 "bdev_io_cache_size": 256, 00:40:06.597 "bdev_auto_examine": true, 00:40:06.597 "iobuf_small_cache_size": 128, 00:40:06.597 "iobuf_large_cache_size": 16 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_raid_set_options", 00:40:06.597 "params": { 00:40:06.597 "process_window_size_kb": 1024, 00:40:06.597 "process_max_bandwidth_mb_sec": 0 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_iscsi_set_options", 00:40:06.597 "params": { 00:40:06.597 "timeout_sec": 30 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_nvme_set_options", 00:40:06.597 "params": { 00:40:06.597 "action_on_timeout": "none", 00:40:06.597 "timeout_us": 0, 00:40:06.597 "timeout_admin_us": 0, 00:40:06.597 "keep_alive_timeout_ms": 10000, 00:40:06.597 "arbitration_burst": 0, 00:40:06.597 "low_priority_weight": 0, 00:40:06.597 "medium_priority_weight": 0, 00:40:06.597 "high_priority_weight": 0, 00:40:06.597 "nvme_adminq_poll_period_us": 10000, 00:40:06.597 "nvme_ioq_poll_period_us": 0, 00:40:06.597 "io_queue_requests": 512, 
00:40:06.597 "delay_cmd_submit": true, 00:40:06.597 "transport_retry_count": 4, 00:40:06.597 "bdev_retry_count": 3, 00:40:06.597 "transport_ack_timeout": 0, 00:40:06.597 "ctrlr_loss_timeout_sec": 0, 00:40:06.597 "reconnect_delay_sec": 0, 00:40:06.597 "fast_io_fail_timeout_sec": 0, 00:40:06.597 "disable_auto_failback": false, 00:40:06.597 "generate_uuids": false, 00:40:06.597 "transport_tos": 0, 00:40:06.597 "nvme_error_stat": false, 00:40:06.597 "rdma_srq_size": 0, 00:40:06.597 "io_path_stat": false, 00:40:06.597 "allow_accel_sequence": false, 00:40:06.597 "rdma_max_cq_size": 0, 00:40:06.597 "rdma_cm_event_timeout_ms": 0, 00:40:06.597 "dhchap_digests": [ 00:40:06.597 "sha256", 00:40:06.597 "sha384", 00:40:06.597 "sha512" 00:40:06.597 ], 00:40:06.597 "dhchap_dhgroups": [ 00:40:06.597 "null", 00:40:06.597 "ffdhe2048", 00:40:06.597 "ffdhe3072", 00:40:06.597 "ffdhe4096", 00:40:06.597 "ffdhe6144", 00:40:06.597 "ffdhe8192" 00:40:06.597 ] 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_nvme_attach_controller", 00:40:06.597 "params": { 00:40:06.597 "name": "nvme0", 00:40:06.597 "trtype": "TCP", 00:40:06.597 "adrfam": "IPv4", 00:40:06.597 "traddr": "127.0.0.1", 00:40:06.597 "trsvcid": "4420", 00:40:06.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:06.597 "prchk_reftag": false, 00:40:06.597 "prchk_guard": false, 00:40:06.597 "ctrlr_loss_timeout_sec": 0, 00:40:06.597 "reconnect_delay_sec": 0, 00:40:06.597 "fast_io_fail_timeout_sec": 0, 00:40:06.597 "psk": "key0", 00:40:06.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:06.597 "hdgst": false, 00:40:06.597 "ddgst": false, 00:40:06.597 "multipath": "multipath" 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_nvme_set_hotplug", 00:40:06.597 "params": { 00:40:06.597 "period_us": 100000, 00:40:06.597 "enable": false 00:40:06.597 } 00:40:06.597 }, 00:40:06.597 { 00:40:06.597 "method": "bdev_wait_for_examine" 00:40:06.597 } 00:40:06.597 ] 00:40:06.597 }, 00:40:06.597 { 
00:40:06.597 "subsystem": "nbd", 00:40:06.598 "config": [] 00:40:06.598 } 00:40:06.598 ] 00:40:06.598 }' 00:40:06.598 12:46:38 keyring_file -- keyring/file.sh@115 -- # killprocess 485765 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 485765 ']' 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@956 -- # kill -0 485765 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@957 -- # uname 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 485765 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 485765' 00:40:06.598 killing process with pid 485765 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@971 -- # kill 485765 00:40:06.598 Received shutdown signal, test time was about 1.000000 seconds 00:40:06.598 00:40:06.598 Latency(us) 00:40:06.598 [2024-11-06T11:46:38.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.598 [2024-11-06T11:46:38.213Z] =================================================================================================================== 00:40:06.598 [2024-11-06T11:46:38.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:06.598 12:46:38 keyring_file -- common/autotest_common.sh@976 -- # wait 485765 00:40:06.984 12:46:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=487753 00:40:06.984 12:46:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 487753 /var/tmp/bperf.sock 00:40:06.984 12:46:38 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 487753 ']' 00:40:06.984 12:46:38 keyring_file -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:40:06.984 12:46:38 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:06.984 12:46:38 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:06.984 12:46:38 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:06.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:06.984 12:46:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:06.984 "subsystems": [ 00:40:06.984 { 00:40:06.984 "subsystem": "keyring", 00:40:06.984 "config": [ 00:40:06.984 { 00:40:06.984 "method": "keyring_file_add_key", 00:40:06.984 "params": { 00:40:06.984 "name": "key0", 00:40:06.984 "path": "/tmp/tmp.1xPjPYVYuL" 00:40:06.984 } 00:40:06.984 }, 00:40:06.984 { 00:40:06.984 "method": "keyring_file_add_key", 00:40:06.984 "params": { 00:40:06.984 "name": "key1", 00:40:06.984 "path": "/tmp/tmp.N2rJ7ZPO28" 00:40:06.984 } 00:40:06.984 } 00:40:06.984 ] 00:40:06.984 }, 00:40:06.984 { 00:40:06.984 "subsystem": "iobuf", 00:40:06.984 "config": [ 00:40:06.984 { 00:40:06.984 "method": "iobuf_set_options", 00:40:06.984 "params": { 00:40:06.984 "small_pool_count": 8192, 00:40:06.984 "large_pool_count": 1024, 00:40:06.984 "small_bufsize": 8192, 00:40:06.984 "large_bufsize": 135168, 00:40:06.984 "enable_numa": false 00:40:06.984 } 00:40:06.984 } 00:40:06.984 ] 00:40:06.984 }, 00:40:06.984 { 00:40:06.984 "subsystem": "sock", 00:40:06.985 "config": [ 00:40:06.985 { 00:40:06.985 "method": "sock_set_default_impl", 00:40:06.985 "params": { 00:40:06.985 "impl_name": "posix" 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "sock_impl_set_options", 00:40:06.985 "params": { 00:40:06.985 "impl_name": "ssl", 00:40:06.985 "recv_buf_size": 4096, 00:40:06.985 
"send_buf_size": 4096, 00:40:06.985 "enable_recv_pipe": true, 00:40:06.985 "enable_quickack": false, 00:40:06.985 "enable_placement_id": 0, 00:40:06.985 "enable_zerocopy_send_server": true, 00:40:06.985 "enable_zerocopy_send_client": false, 00:40:06.985 "zerocopy_threshold": 0, 00:40:06.985 "tls_version": 0, 00:40:06.985 "enable_ktls": false 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "sock_impl_set_options", 00:40:06.985 "params": { 00:40:06.985 "impl_name": "posix", 00:40:06.985 "recv_buf_size": 2097152, 00:40:06.985 "send_buf_size": 2097152, 00:40:06.985 "enable_recv_pipe": true, 00:40:06.985 "enable_quickack": false, 00:40:06.985 "enable_placement_id": 0, 00:40:06.985 "enable_zerocopy_send_server": true, 00:40:06.985 "enable_zerocopy_send_client": false, 00:40:06.985 "zerocopy_threshold": 0, 00:40:06.985 "tls_version": 0, 00:40:06.985 "enable_ktls": false 00:40:06.985 } 00:40:06.985 } 00:40:06.985 ] 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "subsystem": "vmd", 00:40:06.985 "config": [] 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "subsystem": "accel", 00:40:06.985 "config": [ 00:40:06.985 { 00:40:06.985 "method": "accel_set_options", 00:40:06.985 "params": { 00:40:06.985 "small_cache_size": 128, 00:40:06.985 "large_cache_size": 16, 00:40:06.985 "task_count": 2048, 00:40:06.985 "sequence_count": 2048, 00:40:06.985 "buf_count": 2048 00:40:06.985 } 00:40:06.985 } 00:40:06.985 ] 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "subsystem": "bdev", 00:40:06.985 "config": [ 00:40:06.985 { 00:40:06.985 "method": "bdev_set_options", 00:40:06.985 "params": { 00:40:06.985 "bdev_io_pool_size": 65535, 00:40:06.985 "bdev_io_cache_size": 256, 00:40:06.985 "bdev_auto_examine": true, 00:40:06.985 "iobuf_small_cache_size": 128, 00:40:06.985 "iobuf_large_cache_size": 16 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_raid_set_options", 00:40:06.985 "params": { 00:40:06.985 "process_window_size_kb": 1024, 00:40:06.985 
"process_max_bandwidth_mb_sec": 0 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_iscsi_set_options", 00:40:06.985 "params": { 00:40:06.985 "timeout_sec": 30 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_nvme_set_options", 00:40:06.985 "params": { 00:40:06.985 "action_on_timeout": "none", 00:40:06.985 "timeout_us": 0, 00:40:06.985 "timeout_admin_us": 0, 00:40:06.985 "keep_alive_timeout_ms": 10000, 00:40:06.985 "arbitration_burst": 0, 00:40:06.985 "low_priority_weight": 0, 00:40:06.985 "medium_priority_weight": 0, 00:40:06.985 "high_priority_weight": 0, 00:40:06.985 "nvme_adminq_poll_period_us": 10000, 00:40:06.985 "nvme_ioq_poll_period_us": 0, 00:40:06.985 "io_queue_requests": 512, 00:40:06.985 "delay_cmd_submit": true, 00:40:06.985 "transport_retry_count": 4, 00:40:06.985 "bdev_retry_count": 3, 00:40:06.985 "transport_ack_timeout": 0, 00:40:06.985 "ctrlr_loss_timeout_sec": 0, 00:40:06.985 "reconnect_delay_sec": 0, 00:40:06.985 "fast_io_fail_timeout_sec": 0, 00:40:06.985 "disable_auto_failback": false, 00:40:06.985 "generate_uuids": false, 00:40:06.985 "transport_tos": 0, 00:40:06.985 "nvme_error_stat": false, 00:40:06.985 "rdma_srq_size": 0, 00:40:06.985 "io_path_stat": false, 00:40:06.985 "allow_accel_sequence": false, 00:40:06.985 "rdma_max_cq_size": 0, 00:40:06.985 "rdma_cm_event_timeout_ms": 0, 00:40:06.985 "dhchap_digests": [ 00:40:06.985 "sha256", 00:40:06.985 "sha384", 00:40:06.985 "sha512" 00:40:06.985 ], 00:40:06.985 "dhchap_dhgroups": [ 00:40:06.985 "null", 00:40:06.985 "ffdhe2048", 00:40:06.985 "ffdhe3072", 00:40:06.985 "ffdhe4096", 00:40:06.985 "ffdhe6144", 00:40:06.985 "ffdhe8192" 00:40:06.985 ] 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_nvme_attach_controller", 00:40:06.985 "params": { 00:40:06.985 "name": "nvme0", 00:40:06.985 "trtype": "TCP", 00:40:06.985 "adrfam": "IPv4", 00:40:06.985 "traddr": "127.0.0.1", 00:40:06.985 "trsvcid": "4420", 00:40:06.985 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:40:06.985 "prchk_reftag": false, 00:40:06.985 "prchk_guard": false, 00:40:06.985 "ctrlr_loss_timeout_sec": 0, 00:40:06.985 "reconnect_delay_sec": 0, 00:40:06.985 "fast_io_fail_timeout_sec": 0, 00:40:06.985 "psk": "key0", 00:40:06.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:06.985 "hdgst": false, 00:40:06.985 "ddgst": false, 00:40:06.985 "multipath": "multipath" 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_nvme_set_hotplug", 00:40:06.985 "params": { 00:40:06.985 "period_us": 100000, 00:40:06.985 "enable": false 00:40:06.985 } 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "method": "bdev_wait_for_examine" 00:40:06.985 } 00:40:06.985 ] 00:40:06.985 }, 00:40:06.985 { 00:40:06.985 "subsystem": "nbd", 00:40:06.985 "config": [] 00:40:06.985 } 00:40:06.985 ] 00:40:06.985 }' 00:40:06.985 12:46:38 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:06.985 12:46:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:06.985 [2024-11-06 12:46:38.395259] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:40:06.985 [2024-11-06 12:46:38.395319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487753 ] 00:40:06.985 [2024-11-06 12:46:38.463234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.985 [2024-11-06 12:46:38.503678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.243 [2024-11-06 12:46:38.661890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:07.243 12:46:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:07.243 12:46:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:40:07.243 12:46:38 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:07.243 12:46:38 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:07.243 12:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.501 12:46:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:07.501 12:46:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:07.501 12:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:07.501 12:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:07.501 12:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.501 12:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:07.501 12:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.759 12:46:39 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:07.759 12:46:39 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:07.759 12:46:39 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:07.759 12:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:07.759 12:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:07.759 12:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.759 12:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.017 12:46:39 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:08.017 12:46:39 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:08.017 12:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:08.017 12:46:39 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:08.274 12:46:39 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:08.274 12:46:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:08.274 12:46:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1xPjPYVYuL /tmp/tmp.N2rJ7ZPO28 00:40:08.274 12:46:39 keyring_file -- keyring/file.sh@20 -- # killprocess 487753 00:40:08.274 12:46:39 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 487753 ']' 00:40:08.274 12:46:39 keyring_file -- common/autotest_common.sh@956 -- # kill -0 487753 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@957 -- # uname 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 487753 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 487753' 00:40:08.536 killing process with pid 487753 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@971 -- # kill 487753 00:40:08.536 Received shutdown signal, test time was about 1.000000 seconds 00:40:08.536 00:40:08.536 Latency(us) 00:40:08.536 [2024-11-06T11:46:40.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.536 [2024-11-06T11:46:40.151Z] =================================================================================================================== 00:40:08.536 [2024-11-06T11:46:40.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:08.536 12:46:39 keyring_file -- common/autotest_common.sh@976 -- # wait 487753 00:40:08.536 12:46:40 keyring_file -- keyring/file.sh@21 -- # killprocess 485732 00:40:08.536 12:46:40 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 485732 ']' 00:40:08.536 12:46:40 keyring_file -- common/autotest_common.sh@956 -- # kill -0 485732 00:40:08.536 12:46:40 keyring_file -- common/autotest_common.sh@957 -- # uname 00:40:08.536 12:46:40 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:08.536 12:46:40 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 485732 00:40:08.794 12:46:40 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:08.794 12:46:40 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:08.794 12:46:40 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 485732' 00:40:08.794 killing process with pid 485732 00:40:08.794 12:46:40 keyring_file -- common/autotest_common.sh@971 -- # kill 485732 00:40:08.794 12:46:40 keyring_file -- common/autotest_common.sh@976 -- # wait 485732 00:40:09.052 00:40:09.052 real 0m14.092s 00:40:09.052 user 0m35.990s 00:40:09.052 sys 0m3.132s 00:40:09.052 12:46:40 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:09.052 12:46:40 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:09.052 ************************************ 00:40:09.052 END TEST keyring_file 00:40:09.052 ************************************ 00:40:09.052 12:46:40 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:40:09.052 12:46:40 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:09.052 12:46:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:09.052 12:46:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:09.052 12:46:40 -- common/autotest_common.sh@10 -- # set +x 00:40:09.052 ************************************ 00:40:09.052 START TEST keyring_linux 00:40:09.052 ************************************ 00:40:09.052 12:46:40 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:09.052 Joined session keyring: 31159325 00:40:09.310 * Looking for test storage... 
00:40:09.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.310 12:46:40 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:09.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.310 --rc genhtml_branch_coverage=1 00:40:09.310 --rc genhtml_function_coverage=1 00:40:09.310 --rc genhtml_legend=1 00:40:09.310 --rc geninfo_all_blocks=1 00:40:09.310 --rc geninfo_unexecuted_blocks=1 00:40:09.310 00:40:09.310 ' 00:40:09.310 12:46:40 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:09.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.310 --rc genhtml_branch_coverage=1 00:40:09.310 --rc genhtml_function_coverage=1 00:40:09.310 --rc genhtml_legend=1 00:40:09.310 --rc geninfo_all_blocks=1 00:40:09.310 --rc geninfo_unexecuted_blocks=1 00:40:09.310 00:40:09.310 ' 
00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.311 --rc genhtml_branch_coverage=1 00:40:09.311 --rc genhtml_function_coverage=1 00:40:09.311 --rc genhtml_legend=1 00:40:09.311 --rc geninfo_all_blocks=1 00:40:09.311 --rc geninfo_unexecuted_blocks=1 00:40:09.311 00:40:09.311 ' 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.311 --rc genhtml_branch_coverage=1 00:40:09.311 --rc genhtml_function_coverage=1 00:40:09.311 --rc genhtml_legend=1 00:40:09.311 --rc geninfo_all_blocks=1 00:40:09.311 --rc geninfo_unexecuted_blocks=1 00:40:09.311 00:40:09.311 ' 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.311 12:46:40 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.311 12:46:40 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.311 12:46:40 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.311 12:46:40 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.311 12:46:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.311 12:46:40 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.311 12:46:40 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.311 12:46:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:09.311 12:46:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:09.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:09.311 /tmp/:spdk-test:key0 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:09.311 12:46:40 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:09.311 12:46:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:09.311 /tmp/:spdk-test:key1 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=488365 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 488365 00:40:09.311 12:46:40 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 488365 ']' 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:09.311 12:46:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:09.570 [2024-11-06 12:46:40.963003] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:40:09.570 [2024-11-06 12:46:40.963064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488365 ] 00:40:09.570 [2024-11-06 12:46:41.057416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.570 [2024-11-06 12:46:41.107199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:09.828 [2024-11-06 12:46:41.345180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:09.828 null0 00:40:09.828 [2024-11-06 12:46:41.377217] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:09.828 [2024-11-06 12:46:41.377663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:09.828 269415253 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:09.828 1044692684 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=488381 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:09.828 12:46:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 488381 /var/tmp/bperf.sock 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 488381 ']' 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:09.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:09.828 12:46:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:10.086 [2024-11-06 12:46:41.455382] Starting SPDK v25.01-pre git sha1 81757caea / DPDK 24.03.0 initialization... 
00:40:10.086 [2024-11-06 12:46:41.455439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488381 ] 00:40:10.086 [2024-11-06 12:46:41.521133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.086 [2024-11-06 12:46:41.561217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.086 12:46:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:10.086 12:46:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:40:10.086 12:46:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:10.086 12:46:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:10.344 12:46:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:10.344 12:46:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:10.910 12:46:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:10.910 12:46:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:11.169 [2024-11-06 12:46:42.544039] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:11.169 nvme0n1 00:40:11.169 12:46:42 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:11.169 12:46:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:11.169 12:46:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:11.169 12:46:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:11.169 12:46:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:11.169 12:46:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.427 12:46:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:11.427 12:46:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:11.427 12:46:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:11.427 12:46:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:11.427 12:46:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:11.427 12:46:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:11.427 12:46:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@25 -- # sn=269415253 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 269415253 == \2\6\9\4\1\5\2\5\3 ]] 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 269415253 00:40:11.685 12:46:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:11.685 12:46:43 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:11.942 Running I/O for 1 seconds... 00:40:12.874 12500.00 IOPS, 48.83 MiB/s 00:40:12.875 Latency(us) 00:40:12.875 [2024-11-06T11:46:44.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.875 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:12.875 nvme0n1 : 1.01 12503.30 48.84 0.00 0.00 10183.19 7357.91 16562.73 00:40:12.875 [2024-11-06T11:46:44.490Z] =================================================================================================================== 00:40:12.875 [2024-11-06T11:46:44.490Z] Total : 12503.30 48.84 0.00 0.00 10183.19 7357.91 16562.73 00:40:12.875 { 00:40:12.875 "results": [ 00:40:12.875 { 00:40:12.875 "job": "nvme0n1", 00:40:12.875 "core_mask": "0x2", 00:40:12.875 "workload": "randread", 00:40:12.875 "status": "finished", 00:40:12.875 "queue_depth": 128, 00:40:12.875 "io_size": 4096, 00:40:12.875 "runtime": 1.010053, 00:40:12.875 "iops": 12503.304282052526, 00:40:12.875 "mibps": 48.84103235176768, 00:40:12.875 "io_failed": 0, 00:40:12.875 "io_timeout": 0, 00:40:12.875 "avg_latency_us": 10183.190441624256, 00:40:12.875 "min_latency_us": 7357.905454545455, 00:40:12.875 "max_latency_us": 16562.734545454547 00:40:12.875 } 00:40:12.875 ], 00:40:12.875 "core_count": 1 00:40:12.875 } 00:40:12.875 12:46:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:12.875 12:46:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:13.132 12:46:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:13.132 12:46:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:13.132 12:46:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:13.132 12:46:44 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:13.132 12:46:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:13.132 12:46:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.391 12:46:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:13.391 12:46:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:13.391 12:46:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:13.391 12:46:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:13.391 12:46:44 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:13.391 12:46:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:13.649 [2024-11-06 12:46:45.160268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:13.649 [2024-11-06 12:46:45.160523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1cf40 (107): Transport endpoint is not connected 00:40:13.649 [2024-11-06 12:46:45.161516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1cf40 (9): Bad file descriptor 00:40:13.649 [2024-11-06 12:46:45.162518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:13.649 [2024-11-06 12:46:45.162526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:13.649 [2024-11-06 12:46:45.162533] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:13.649 [2024-11-06 12:46:45.162541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:13.649 request: 00:40:13.649 { 00:40:13.649 "name": "nvme0", 00:40:13.649 "trtype": "tcp", 00:40:13.649 "traddr": "127.0.0.1", 00:40:13.649 "adrfam": "ipv4", 00:40:13.649 "trsvcid": "4420", 00:40:13.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:13.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:13.650 "prchk_reftag": false, 00:40:13.650 "prchk_guard": false, 00:40:13.650 "hdgst": false, 00:40:13.650 "ddgst": false, 00:40:13.650 "psk": ":spdk-test:key1", 00:40:13.650 "allow_unrecognized_csi": false, 00:40:13.650 "method": "bdev_nvme_attach_controller", 00:40:13.650 "req_id": 1 00:40:13.650 } 00:40:13.650 Got JSON-RPC error response 00:40:13.650 response: 00:40:13.650 { 00:40:13.650 "code": -5, 00:40:13.650 "message": "Input/output error" 00:40:13.650 } 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@33 -- # sn=269415253 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 269415253 00:40:13.650 1 links removed 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:13.650 
12:46:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@33 -- # sn=1044692684 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1044692684 00:40:13.650 1 links removed 00:40:13.650 12:46:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 488381 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 488381 ']' 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 488381 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:13.650 12:46:45 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 488381 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 488381' 00:40:13.908 killing process with pid 488381 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@971 -- # kill 488381 00:40:13.908 Received shutdown signal, test time was about 1.000000 seconds 00:40:13.908 00:40:13.908 Latency(us) 00:40:13.908 [2024-11-06T11:46:45.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:13.908 [2024-11-06T11:46:45.523Z] =================================================================================================================== 00:40:13.908 [2024-11-06T11:46:45.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@976 -- # wait 488381 
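The cleanup phase above resolves each `:spdk-test:keyN` key to its serial number with `keyctl search` and then drops it with `keyctl unlink`. A minimal sketch of that pattern, reconstructed from the xtrace output (the `KEYCTL` indirection is added here purely for illustration and is not part of keyring/linux.sh):

```shell
#!/usr/bin/env bash
# Sketch of the unlink_key pattern from keyring/linux.sh, as seen in the
# xtrace above: look the key up in the session keyring (@s), then unlink it.
unlink_key() {
  local name=$1 sn
  # Resolve ":spdk-test:<name>" to its serial number; return quietly if absent.
  sn=$("${KEYCTL:-keyctl}" search @s user ":spdk-test:$name") || return 0
  # Drop the key from the session keyring; keyctl reports "1 links removed".
  "${KEYCTL:-keyctl}" unlink "$sn"
}
```

On a live system this needs the `keyutils` package; the run above removed serials 269415253 and 1044692684 this way.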
00:40:13.908 12:46:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 488365 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 488365 ']' 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 488365 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 488365 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 488365' 00:40:13.908 killing process with pid 488365 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@971 -- # kill 488365 00:40:13.908 12:46:45 keyring_linux -- common/autotest_common.sh@976 -- # wait 488365 00:40:14.474 00:40:14.474 real 0m5.222s 00:40:14.474 user 0m10.431s 00:40:14.474 sys 0m1.594s 00:40:14.474 12:46:45 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:14.474 12:46:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:14.474 ************************************ 00:40:14.474 END TEST keyring_linux 00:40:14.474 ************************************ 00:40:14.474 12:46:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@342 -- # '[' 0 
-eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:14.474 12:46:45 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:40:14.474 12:46:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:14.474 12:46:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:14.474 12:46:45 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:40:14.474 12:46:45 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:40:14.474 12:46:45 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:40:14.474 12:46:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:14.474 12:46:45 -- common/autotest_common.sh@10 -- # set +x 00:40:14.474 12:46:45 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:40:14.474 12:46:45 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:40:14.474 12:46:45 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:40:14.474 12:46:45 -- common/autotest_common.sh@10 -- # set +x 00:40:19.736 INFO: APP EXITING 00:40:19.736 INFO: killing all VMs 00:40:19.736 INFO: killing vhost app 00:40:19.736 WARN: no vhost pid file found 00:40:19.736 INFO: EXIT DONE 00:40:21.636 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:40:21.636 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.6 (8086 2021): Already 
using the ioatdma driver 00:40:21.894 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:40:21.894 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:40:22.152 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:40:24.682 Cleaning 00:40:24.682 Removing: /var/run/dpdk/spdk0/config 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:24.682 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:24.682 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:24.682 Removing: /var/run/dpdk/spdk1/config 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:24.682 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:24.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:24.940 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:24.940 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:24.940 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:24.940 Removing: /var/run/dpdk/spdk2/config 00:40:24.940 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:24.940 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:24.940 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:24.940 Removing: /var/run/dpdk/spdk3/config 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:24.940 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:24.940 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:24.940 Removing: /var/run/dpdk/spdk4/config 00:40:24.940 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:24.941 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:40:24.941 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:24.941 Removing: /dev/shm/bdev_svc_trace.1 00:40:24.941 Removing: /dev/shm/nvmf_trace.0 00:40:24.941 Removing: /dev/shm/spdk_tgt_trace.pid4160398 00:40:24.941 Removing: /var/run/dpdk/spdk0 00:40:24.941 Removing: /var/run/dpdk/spdk1 00:40:24.941 Removing: /var/run/dpdk/spdk2 00:40:24.941 Removing: /var/run/dpdk/spdk3 00:40:24.941 Removing: /var/run/dpdk/spdk4 00:40:24.941 Removing: /var/run/dpdk/spdk_pid118743 00:40:24.941 Removing: /var/run/dpdk/spdk_pid124344 00:40:24.941 Removing: /var/run/dpdk/spdk_pid130850 00:40:24.941 Removing: /var/run/dpdk/spdk_pid137941 00:40:24.941 Removing: /var/run/dpdk/spdk_pid138002 00:40:24.941 Removing: /var/run/dpdk/spdk_pid139108 00:40:24.941 Removing: /var/run/dpdk/spdk_pid13911 00:40:24.941 Removing: /var/run/dpdk/spdk_pid140093 00:40:24.941 Removing: /var/run/dpdk/spdk_pid141333 00:40:24.941 Removing: /var/run/dpdk/spdk_pid141982 00:40:24.941 Removing: /var/run/dpdk/spdk_pid142111 00:40:24.941 Removing: /var/run/dpdk/spdk_pid142379 00:40:24.941 Removing: /var/run/dpdk/spdk_pid142488 00:40:24.941 Removing: /var/run/dpdk/spdk_pid142640 00:40:24.941 Removing: /var/run/dpdk/spdk_pid143435 00:40:24.941 Removing: /var/run/dpdk/spdk_pid144473 00:40:25.199 Removing: /var/run/dpdk/spdk_pid145398 00:40:25.199 Removing: /var/run/dpdk/spdk_pid146046 00:40:25.199 Removing: /var/run/dpdk/spdk_pid146059 00:40:25.199 Removing: /var/run/dpdk/spdk_pid146388 00:40:25.200 Removing: /var/run/dpdk/spdk_pid147929 00:40:25.200 Removing: /var/run/dpdk/spdk_pid149095 00:40:25.200 Removing: /var/run/dpdk/spdk_pid157791 00:40:25.200 Removing: /var/run/dpdk/spdk_pid196709 00:40:25.200 Removing: /var/run/dpdk/spdk_pid201790 00:40:25.200 Removing: /var/run/dpdk/spdk_pid20327 00:40:25.200 Removing: /var/run/dpdk/spdk_pid203491 00:40:25.200 Removing: /var/run/dpdk/spdk_pid205484 00:40:25.200 Removing: /var/run/dpdk/spdk_pid205702 00:40:25.200 Removing: /var/run/dpdk/spdk_pid205768 00:40:25.200 
Removing: /var/run/dpdk/spdk_pid206034 00:40:25.200 Removing: /var/run/dpdk/spdk_pid206661 00:40:25.200 Removing: /var/run/dpdk/spdk_pid208704 00:40:25.200 Removing: /var/run/dpdk/spdk_pid209827 00:40:25.200 Removing: /var/run/dpdk/spdk_pid210391 00:40:25.200 Removing: /var/run/dpdk/spdk_pid212910 00:40:25.200 Removing: /var/run/dpdk/spdk_pid213687 00:40:25.200 Removing: /var/run/dpdk/spdk_pid214606 00:40:25.200 Removing: /var/run/dpdk/spdk_pid218990 00:40:25.200 Removing: /var/run/dpdk/spdk_pid224585 00:40:25.200 Removing: /var/run/dpdk/spdk_pid224586 00:40:25.200 Removing: /var/run/dpdk/spdk_pid224587 00:40:25.200 Removing: /var/run/dpdk/spdk_pid228578 00:40:25.200 Removing: /var/run/dpdk/spdk_pid23551 00:40:25.200 Removing: /var/run/dpdk/spdk_pid237291 00:40:25.200 Removing: /var/run/dpdk/spdk_pid241610 00:40:25.200 Removing: /var/run/dpdk/spdk_pid247884 00:40:25.200 Removing: /var/run/dpdk/spdk_pid249106 00:40:25.200 Removing: /var/run/dpdk/spdk_pid250840 00:40:25.200 Removing: /var/run/dpdk/spdk_pid252327 00:40:25.200 Removing: /var/run/dpdk/spdk_pid257126 00:40:25.200 Removing: /var/run/dpdk/spdk_pid261579 00:40:25.200 Removing: /var/run/dpdk/spdk_pid265914 00:40:25.200 Removing: /var/run/dpdk/spdk_pid273847 00:40:25.200 Removing: /var/run/dpdk/spdk_pid273849 00:40:25.200 Removing: /var/run/dpdk/spdk_pid278672 00:40:25.200 Removing: /var/run/dpdk/spdk_pid278928 00:40:25.200 Removing: /var/run/dpdk/spdk_pid279189 00:40:25.200 Removing: /var/run/dpdk/spdk_pid279712 00:40:25.200 Removing: /var/run/dpdk/spdk_pid279717 00:40:25.200 Removing: /var/run/dpdk/spdk_pid284353 00:40:25.200 Removing: /var/run/dpdk/spdk_pid284930 00:40:25.200 Removing: /var/run/dpdk/spdk_pid289584 00:40:25.200 Removing: /var/run/dpdk/spdk_pid292623 00:40:25.200 Removing: /var/run/dpdk/spdk_pid298314 00:40:25.200 Removing: /var/run/dpdk/spdk_pid304085 00:40:25.200 Removing: /var/run/dpdk/spdk_pid314311 00:40:25.200 Removing: /var/run/dpdk/spdk_pid322096 00:40:25.200 Removing: 
/var/run/dpdk/spdk_pid322098 00:40:25.200 Removing: /var/run/dpdk/spdk_pid342644 00:40:25.200 Removing: /var/run/dpdk/spdk_pid343432 00:40:25.200 Removing: /var/run/dpdk/spdk_pid343970 00:40:25.200 Removing: /var/run/dpdk/spdk_pid344694 00:40:25.200 Removing: /var/run/dpdk/spdk_pid345479 00:40:25.200 Removing: /var/run/dpdk/spdk_pid346137 00:40:25.200 Removing: /var/run/dpdk/spdk_pid346678 00:40:25.200 Removing: /var/run/dpdk/spdk_pid34741 00:40:25.200 Removing: /var/run/dpdk/spdk_pid347462 00:40:25.200 Removing: /var/run/dpdk/spdk_pid351757 00:40:25.200 Removing: /var/run/dpdk/spdk_pid352019 00:40:25.200 Removing: /var/run/dpdk/spdk_pid358387 00:40:25.200 Removing: /var/run/dpdk/spdk_pid358438 00:40:25.458 Removing: /var/run/dpdk/spdk_pid364109 00:40:25.458 Removing: /var/run/dpdk/spdk_pid368729 00:40:25.458 Removing: /var/run/dpdk/spdk_pid379687 00:40:25.458 Removing: /var/run/dpdk/spdk_pid380227 00:40:25.458 Removing: /var/run/dpdk/spdk_pid384517 00:40:25.458 Removing: /var/run/dpdk/spdk_pid384804 00:40:25.458 Removing: /var/run/dpdk/spdk_pid389351 00:40:25.458 Removing: /var/run/dpdk/spdk_pid3948 00:40:25.458 Removing: /var/run/dpdk/spdk_pid395500 00:40:25.458 Removing: /var/run/dpdk/spdk_pid398290 00:40:25.458 Removing: /var/run/dpdk/spdk_pid408642 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4157972 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4159179 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4160398 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4161097 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4162106 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4162190 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4163294 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4163561 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4163945 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4165792 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4167354 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4167673 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4168010 00:40:25.458 Removing: 
/var/run/dpdk/spdk_pid4168348 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4168671 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4168955 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4169235 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4169557 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4169872 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4173522 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4173816 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4174104 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4174108 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4174660 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4174670 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4175062 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4175223 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4175575 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4175647 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4175953 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4176217 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4176719 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4176893 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4177552 00:40:25.458 Removing: /var/run/dpdk/spdk_pid417808 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4181749 00:40:25.458 Removing: /var/run/dpdk/spdk_pid4186302 00:40:25.458 Removing: /var/run/dpdk/spdk_pid420015 00:40:25.458 Removing: /var/run/dpdk/spdk_pid421064 00:40:25.458 Removing: /var/run/dpdk/spdk_pid438293 00:40:25.459 Removing: /var/run/dpdk/spdk_pid442341 00:40:25.459 Removing: /var/run/dpdk/spdk_pid445211 00:40:25.459 Removing: /var/run/dpdk/spdk_pid44662 00:40:25.459 Removing: /var/run/dpdk/spdk_pid4512 00:40:25.459 Removing: /var/run/dpdk/spdk_pid453270 00:40:25.459 Removing: /var/run/dpdk/spdk_pid453405 00:40:25.459 Removing: /var/run/dpdk/spdk_pid458481 00:40:25.459 Removing: /var/run/dpdk/spdk_pid460600 00:40:25.459 Removing: /var/run/dpdk/spdk_pid462709 00:40:25.459 Removing: /var/run/dpdk/spdk_pid464056 00:40:25.459 Removing: /var/run/dpdk/spdk_pid46504 00:40:25.459 Removing: 
/var/run/dpdk/spdk_pid466661 00:40:25.459 Removing: /var/run/dpdk/spdk_pid467880 00:40:25.459 Removing: /var/run/dpdk/spdk_pid47555 00:40:25.459 Removing: /var/run/dpdk/spdk_pid477072 00:40:25.717 Removing: /var/run/dpdk/spdk_pid477613 00:40:25.717 Removing: /var/run/dpdk/spdk_pid478276 00:40:25.717 Removing: /var/run/dpdk/spdk_pid480726 00:40:25.717 Removing: /var/run/dpdk/spdk_pid481263 00:40:25.717 Removing: /var/run/dpdk/spdk_pid481802 00:40:25.717 Removing: /var/run/dpdk/spdk_pid485732 00:40:25.717 Removing: /var/run/dpdk/spdk_pid485765 00:40:25.717 Removing: /var/run/dpdk/spdk_pid487753 00:40:25.717 Removing: /var/run/dpdk/spdk_pid488365 00:40:25.717 Removing: /var/run/dpdk/spdk_pid488381 00:40:25.717 Removing: /var/run/dpdk/spdk_pid65873 00:40:25.717 Removing: /var/run/dpdk/spdk_pid69956 00:40:25.717 Removing: /var/run/dpdk/spdk_pid9065 00:40:25.717 Removing: /var/run/dpdk/spdk_pid9357 00:40:25.717 Clean 00:40:25.717 12:46:57 -- common/autotest_common.sh@1451 -- # return 0 00:40:25.717 12:46:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:25.717 12:46:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:25.717 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:40:25.717 12:46:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:25.717 12:46:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:25.717 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:40:25.717 12:46:57 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:25.717 12:46:57 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:25.717 12:46:57 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:25.717 12:46:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:25.717 12:46:57 -- spdk/autotest.sh@394 -- # hostname 00:40:25.717 12:46:57 -- spdk/autotest.sh@394 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:25.975 geninfo: WARNING: invalid characters removed from testname! 00:40:58.047 12:47:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:00.614 12:47:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:03.902 12:47:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:06.436 12:47:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:09.725 12:47:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:13.025 12:47:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:15.558 12:47:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:15.558 12:47:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:15.558 12:47:47 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:15.558 12:47:47 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:15.558 12:47:47 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:15.558 12:47:47 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:15.558 + [[ -n 4074522 ]] 00:41:15.558 + sudo kill 4074522 00:41:15.568 [Pipeline] } 00:41:15.584 [Pipeline] // 
stage 00:41:15.590 [Pipeline] } 00:41:15.607 [Pipeline] // timeout 00:41:15.612 [Pipeline] } 00:41:15.626 [Pipeline] // catchError 00:41:15.632 [Pipeline] } 00:41:15.648 [Pipeline] // wrap 00:41:15.654 [Pipeline] } 00:41:15.667 [Pipeline] // catchError 00:41:15.677 [Pipeline] stage 00:41:15.680 [Pipeline] { (Epilogue) 00:41:15.693 [Pipeline] catchError 00:41:15.695 [Pipeline] { 00:41:15.709 [Pipeline] echo 00:41:15.711 Cleanup processes 00:41:15.717 [Pipeline] sh 00:41:16.003 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:16.003 499570 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:16.016 [Pipeline] sh 00:41:16.299 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:16.299 ++ grep -v 'sudo pgrep' 00:41:16.299 ++ awk '{print $1}' 00:41:16.299 + sudo kill -9 00:41:16.299 + true 00:41:16.311 [Pipeline] sh 00:41:16.596 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:34.699 [Pipeline] sh 00:41:34.992 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:34.992 Artifacts sizes are good 00:41:35.007 [Pipeline] archiveArtifacts 00:41:35.015 Archiving artifacts 00:41:35.163 [Pipeline] sh 00:41:35.463 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:35.477 [Pipeline] cleanWs 00:41:35.486 [WS-CLEANUP] Deleting project workspace... 00:41:35.487 [WS-CLEANUP] Deferred wipeout is used... 00:41:35.493 [WS-CLEANUP] done 00:41:35.495 [Pipeline] } 00:41:35.512 [Pipeline] // catchError 00:41:35.524 [Pipeline] sh 00:41:35.840 + logger -p user.info -t JENKINS-CI 00:41:35.908 [Pipeline] } 00:41:35.921 [Pipeline] // stage 00:41:35.926 [Pipeline] } 00:41:35.941 [Pipeline] // node 00:41:35.946 [Pipeline] End of Pipeline 00:41:35.997 Finished: SUCCESS
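Both the prologue and the epilogue of this run execute the same "Cleanup processes" step: pgrep for leftover SPDK processes, filter out the pgrep invocation itself, and kill whatever remains. The PID-extraction pipeline can be sketched as follows (`extract_pids` is a hypothetical helper; the real job pipes `sudo pgrep -af ... | grep -v 'sudo pgrep' | awk '{print $1}'` directly, then runs `sudo kill -9` followed by `true` so an empty match does not fail the build):

```shell
#!/usr/bin/env bash
# Sketch of the PID-extraction pipeline used by the "Cleanup processes" steps.
extract_pids() {
  # $1: a "PID command" listing, one process per line (as pgrep -af prints).
  # Drops the line for the pgrep invocation itself, then keeps column 1 (PID).
  printf '%s\n' "$1" | grep -v 'sudo pgrep' | awk '{print $1}'
}

listing='4073057 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
1234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt'
extract_pids "$listing"   # prints only 1234; the pgrep process is filtered out
```

Self-filtering matters here: `pgrep -af <pattern>` matches its own command line, so without the `grep -v` the job would try to kill the very pgrep that produced the listing.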